Research Hypothesis in Psychology: Types & Examples

Saul Mcleod, PhD

Editor-in-Chief for Simply Psychology

BSc (Hons) Psychology, MRes, PhD, University of Manchester

Saul Mcleod, PhD., is a qualified psychology teacher with over 18 years of experience in further and higher education. He has been published in peer-reviewed journals, including the Journal of Clinical Psychology.


Olivia Guy-Evans, MSc

Associate Editor for Simply Psychology

BSc (Hons) Psychology, MSc Psychology of Education

Olivia Guy-Evans is a writer and associate editor for Simply Psychology. She has previously worked in healthcare and educational sectors.


A research hypothesis (plural: hypotheses) is a specific, testable prediction about the anticipated results of a study, established at its outset. It is a key component of the scientific method.

Hypotheses connect theory to data and guide the research process towards expanding scientific understanding.

Some key points about hypotheses:

  • A hypothesis expresses an expected pattern or relationship. It connects the variables under investigation.
  • It is stated in clear, precise terms before any data collection or analysis occurs. This makes the hypothesis testable.
  • A hypothesis must be falsifiable. It should be possible, even if unlikely in practice, to collect data that disconfirms rather than supports the hypothesis.
  • Hypotheses guide research. Scientists design studies to explicitly evaluate hypotheses about how nature works.
  • For a hypothesis to be valid, it must be testable against empirical evidence. The evidence can then confirm or disprove the testable predictions.
  • Hypotheses are informed by background knowledge and observation, but go beyond what is already known to propose an explanation of how or why something occurs.
Predictions typically arise from a thorough knowledge of the research literature, curiosity about real-world problems or implications, and integrating this to advance theory. They build on existing literature while providing new insight.

Types of Research Hypotheses

Alternative Hypothesis

The research hypothesis is often called the alternative or experimental hypothesis in experimental research.

It typically suggests a potential relationship between two key variables: the independent variable, which the researcher manipulates, and the dependent variable, which is measured based on those changes.

The alternative hypothesis states a relationship exists between the two variables being studied (one variable affects the other).


An experimental hypothesis predicts what change(s) will occur in the dependent variable when the independent variable is manipulated.

It states that the results are not due to chance and are significant in supporting the theory being investigated.

The alternative hypothesis can be directional, indicating a specific direction of the effect, or non-directional, suggesting a difference without specifying its nature. It’s what researchers aim to support or demonstrate through their study.

Null Hypothesis

The null hypothesis states no relationship exists between the two variables being studied (one variable does not affect the other). There will be no changes in the dependent variable due to manipulating the independent variable.

It states results are due to chance and are not significant in supporting the idea being investigated.

The null hypothesis, positing no effect or relationship, is a foundational contrast to the research hypothesis in scientific inquiry. It establishes a baseline for statistical testing, promoting objectivity by initiating research from a neutral stance.

Many statistical methods are tailored to test the null hypothesis, determining the likelihood of observed results if no true effect exists.

This dual-hypothesis approach provides clarity, ensuring that research intentions are explicit, and fosters consistency across scientific studies, enhancing the standardization and interpretability of research outcomes.

Nondirectional Hypothesis

A non-directional hypothesis, also known as a two-tailed hypothesis, predicts that there is a difference or relationship between two variables but does not specify the direction of this relationship.

It merely indicates that a change or effect will occur without predicting which group will have higher or lower values.

For example, “There is a difference in performance between Group A and Group B” is a non-directional hypothesis.

Directional Hypothesis

A directional (one-tailed) hypothesis predicts the nature of the effect of the independent variable on the dependent variable, specifying the direction in which the change will take place (e.g., greater, smaller, more, less).

It specifies whether one variable is greater, lesser, or different from another, rather than just indicating that there’s a difference without specifying its nature.

For example, “Exercise increases weight loss” is a directional hypothesis.
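The practical difference shows up in statistical testing: a directional hypothesis is evaluated with a one-tailed test, a non-directional hypothesis with a two-tailed test. Here is a minimal Python sketch (the z value is invented for illustration, and a standard normal null distribution is assumed):

```python
import math

def p_one_tailed(z: float) -> float:
    """P(Z > z): the one-tailed p-value under a standard normal null."""
    return 0.5 * math.erfc(z / math.sqrt(2))

def p_two_tailed(z: float) -> float:
    """P(|Z| > |z|): a difference in either direction counts as evidence."""
    return math.erfc(abs(z) / math.sqrt(2))

z = 1.8  # invented test statistic for illustration
print(round(p_one_tailed(z), 3))  # 0.036 -- directional test
print(round(p_two_tailed(z), 3))  # 0.072 -- exactly twice as large
```

Because the two-tailed p-value is exactly twice the one-tailed value, a directional prediction should only be made when the literature genuinely supports one direction; otherwise the smaller p-value is unearned.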


Falsifiability

The Falsification Principle, proposed by Karl Popper, is a way of demarcating science from non-science. It suggests that for a theory or hypothesis to be considered scientific, it must be testable and refutable.

Falsifiability emphasizes that scientific claims shouldn’t just be confirmable but should also have the potential to be proven wrong.

It means that there should exist some potential evidence or experiment that could prove the proposition false.

No matter how many confirming instances exist for a theory, it takes only one counter-observation to falsify it. For example, the hypothesis that "all swans are white" can be falsified by observing a black swan.

For Popper, science should attempt to disprove a theory rather than attempt to continually provide evidence to support a research hypothesis.

Can a Hypothesis be Proven?

Hypotheses make probabilistic predictions. They state the expected outcome if a particular relationship exists. However, a study result supporting a hypothesis does not definitively prove it is true.

All studies have limitations. There may be unknown confounding factors or issues that limit the certainty of conclusions. Additional studies may yield different results.

In science, hypotheses can realistically only be supported with some degree of confidence, not proven. The process of science is to incrementally accumulate evidence for and against hypothesized relationships in an ongoing pursuit of better models and explanations that best fit the empirical data. But hypotheses remain open to revision and rejection if that is where the evidence leads.
  • Disproving a hypothesis is definitive. Solid disconfirmatory evidence will falsify a hypothesis and require altering or discarding it based on the evidence.
  • However, confirming evidence is always open to revision. Other explanations may account for the same results, and additional or contradictory evidence may emerge over time.

We can never prove the alternative hypothesis with 100% certainty. Instead, we see whether we can disprove, or reject, the null hypothesis.

If we reject the null hypothesis, this doesn't prove the alternative hypothesis is correct, but it does lend support to the alternative/experimental hypothesis.

Upon analysis of the results, an alternative hypothesis can be rejected or supported, but it can never be proven to be correct. We must avoid any reference to results proving a theory as this implies 100% certainty, and there is always a chance that evidence may exist which could refute a theory.

How to Write a Hypothesis

  • Identify variables. The independent variable is what the researcher manipulates; the dependent variable is the measured outcome.
  • Operationalize the variables being investigated. Operationalization of a hypothesis refers to the process of making the variables physically measurable or testable, e.g., if you are studying aggression, you might count the number of punches thrown by participants.
  • Decide on a direction for your prediction. If there is evidence in the literature to support a specific effect of the independent variable on the dependent variable, write a directional (one-tailed) hypothesis. If there are limited or ambiguous findings in the literature regarding the effect of the independent variable on the dependent variable, write a non-directional (two-tailed) hypothesis.
  • Make it testable. Ensure your hypothesis can be tested through experimentation or observation. It should be possible to prove it false (principle of falsifiability).
  • Use clear and concise language. A strong hypothesis is concise (typically one to two sentences long) and formulated in clear, straightforward language, ensuring it is easily understood and testable.

Consider a hypothesis many teachers might subscribe to: students work better on Monday morning than on Friday afternoon (IV = day of the week, DV = standard of work).

Now, if we decide to study this by giving the same group of students a lesson on a Monday morning and a Friday afternoon and then measuring their immediate recall of the material covered in each session, we would end up with the following:

  • The alternative hypothesis states that students will recall significantly more information on a Monday morning than on a Friday afternoon.
  • The null hypothesis states that there will be no significant difference in the amount recalled on a Monday morning compared to a Friday afternoon. Any difference will be due to chance or confounding factors.
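Under these two hypotheses, the analysis asks whether the observed difference is too large to attribute to chance. Here is a minimal Python sketch of a paired t-test for this design; the recall scores are invented for illustration, and a real study would compare the t statistic against a critical value from the t distribution:

```python
import math
import statistics

# Invented recall scores (items remembered) for the same ten students,
# tested once on Monday morning and once on Friday afternoon.
monday = [14, 16, 13, 15, 17, 12, 16, 14, 15, 13]
friday = [12, 15, 11, 14, 15, 12, 13, 13, 14, 11]

# A paired design: each student serves as their own control, so we test
# whether the mean Monday-minus-Friday difference is zero (the null hypothesis).
diffs = [m - f for m, f in zip(monday, friday)]
n = len(diffs)
mean_d = statistics.mean(diffs)
se_d = statistics.stdev(diffs) / math.sqrt(n)
t_stat = mean_d / se_d  # compare against a t distribution with n - 1 df

print(f"mean difference = {mean_d:.2f}, t({n - 1}) = {t_stat:.2f}")
```

If the resulting t statistic exceeds the critical value, we reject the null hypothesis; the data then support, but do not prove, the alternative hypothesis.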

More Examples

  • Memory : Participants exposed to classical music during study sessions will recall more items from a list than those who studied in silence.
  • Social Psychology : Individuals who frequently engage in social media use will report higher levels of perceived social isolation compared to those who use it infrequently.
  • Developmental Psychology : Children who engage in regular imaginative play have better problem-solving skills than those who don’t.
  • Clinical Psychology : Cognitive-behavioral therapy will be more effective in reducing symptoms of anxiety over a 6-month period compared to traditional talk therapy.
  • Cognitive Psychology : Individuals who multitask between various electronic devices will have shorter attention spans on focused tasks than those who single-task.
  • Health Psychology : Patients who practice mindfulness meditation will experience lower levels of chronic pain compared to those who don’t meditate.
  • Organizational Psychology : Employees in open-plan offices will report higher levels of stress than those in private offices.
  • Behavioral Psychology : Rats rewarded with food after pressing a lever will press it more frequently than rats who receive no reward.


Front. Psychol.

What Is Causal Cognition?

While gaining an understanding of cause-effect relations is the key goal of causal cognition, its components are less clearly delineated. Standard approaches in the field focus on how individuals detect, learn, and reason from statistical regularities, thereby prioritizing cognitive processes over content and context. This article calls for a broadened perspective. To gain a more comprehensive understanding of what is going on when humans engage in causal cognition—including its application to machine cognition—it is argued, we also need to take into account the content that informs the processing, the means and mechanisms of knowledge accumulation and transmission, and the cultural context in which both accumulation and transmission take place.

Introduction

Causality is the relation between two events, one of which is the consequence (or effect ) of the other ( cause ). Gaining an understanding of such cause-effect relations is of prime concern for humans, starting in infancy with a drive to explore one’s world and test one’s assumptions ( Gopnik et al., 1999 ; Muentener and Bonawitz, 2017 ). Indeed, the ability to attain causal understanding and harness it for diagnoses, predictions, and interventions is so advantageous that it has been considered the main driving force in human evolution ( Stuart-Fox, 2015 ; Lombard and Gärdenfors, 2017 ).

While understanding is arguably the key goal of causal cognition, its components are less clearly delineated. So, what exactly is causal cognition? Or rather, how should we conceptualize it from a cognitive science point of view? As will be detailed in the next section, many approaches in this field focus on the detection of and reasoning from statistical regularities. Taking this rather narrow focus as the starting point, I will advocate a broader perspective on causal cognition, which also factors in its distinctly human characteristics, specifically the crucial roles of content, knowledge transmission, and culture. Implications for the field—including application to machine cognition—will be discussed prior to the conclusion.

Perspectives on Causal Cognition

The preamble for this research topic outlines causal cognition as the ability “to perceive and reason about […] cause-effect relations.” 1 This outline largely reflects what may be seen as the “standard view” in cognitive and social psychology. In the following, this view will be fleshed out, before addressing the dimensions along which it needs to be extended.

The Standard View

Precise definitions of causal cognition are hard to come by. Scholars tend to presume that the term is self-explanatory and hence only mention in passing what they are actually focusing on. Nevertheless, a reasonably reliable impression can be gleaned from the first five publications that appear when "causal cognition" is entered into Google Scholar (with 1,280 citations in total, as of 12 August 2019, sorted by relevance).

The three publications which come from cognitive and comparative psychology cast causal cognition as the understanding of causal mechanisms ( Zuberbühler, 2000 ; Penn and Povinelli, 2007 ) and as representations of the causal relation between action and outcome ( Dickinson and Balleine, 2000 ). That is, concealed by the more generic term “causal cognition,” the subject of the respective works is actually confined to just a few aspects, each of which has an entire research tradition devoted to it: perception ( Michotte, 1963 ; Saxe and Carey, 2006 ), learning ( Shanks et al., 1996 ; Gopnik et al., 2004 ), and reasoning ( Blaisdell et al., 2006 ; Waldmann, 2017 ).

Social psychologists add attribution, as their topic of core concern, to this range of cognitive processes ( Norenzayan and Nisbett, 2000 ), that is, explanations of social behavior in terms of dispositional and/or contextual factors ( Kelley, 1973 ; Choi et al., 1999 ). The cognitive and the social tradition essentially differ in terms of the explanandum —a change as the outcome of an event or of one’s actions, versus an account of why people behave in a certain way—but they both conceptualize causal cognition as consisting of mental processes.

While some scholars emphasize the domain-general nature of these processes, others consider domain boundaries to be relevant for distinguishing different types of causal cognition ( Morris and Peng, 1994 ). And some even argue for the existence of domain-specific modules devoted to reasoning distinctly about physical, biological, and social/psychological events ( Leslie, 1994 ; Spelke and Kinzler, 2007 ). Domains in this sense are defined by the distinct properties of their key entities and the causal principles accounting for their behavior. Objects in the physical domain, for instance, move when propelled by external forces in line with mechanistic principles, whereas the inhabitants of the biological domain are able to move of their own accord, in line with vitalistic principles. These different principles motivate a conceptual distinction between the constructs of cause (as eliciting a physical effect) versus reason (as motivating behavior), and between cognitive processes devoted to physical causation (like perception and reasoning) versus those devoted to social agency (like attribution and ascription of responsibility).

Only one of the five above-mentioned publications, a multidisciplinary compilation of 20 contributions on causal cognition ( Sperber et al., 1995 ), outlines a broader range of perspectives, regarding both the processes and factors involved and the domains considered.

A More Comprehensive View

Some core components of causal cognition, like learning based on statistical regularities, are firmly rooted in our evolutionary past: They are present in non-human animals, they are observable in human infants, and they enabled our ancestors to move out of their original habitat and spread around the globe ( Bender, 2019 ). Even these shared roots, however, do not render causal cognition a uniform phenomenon. Relevant abilities in infants already transcend those of our closest relatives in several ways. Causal cognition in humans is characterized, inter alia , by the integration of content information into theory-like representations, with serious implications for processing. This role of content and the means by which it is incorporated will be outlined in more detail in the following.

The Role of Content for Processing

As noted above, the bulk of research on causal cognition focuses on processing while abstracting from content. As one consequence, methods prioritize artificial tasks in laboratory settings, involving toys and other stimuli designed for the very purpose of bearing no similarity to anything with which participants may be familiar (e.g., Gopnik and Sobel, 2000 ). Confronted with a meaningless pattern of statistical regularities, the participant’s task is to diagnose the underlying causal relations. Oddly enough, the very reason for doing so is that content plays such an overwhelming role in human causal cognition that, to be able to isolate the “pure” processes underlying it, detaching these processes from content appears indispensable.

The most abstract form of content is a structural model of the causal relations involved (e.g., whether they constitute a simple chain or a more complex network), and even rats have been shown to form such deeper causal representations, which lead their learning and reasoning ( Blaisdell et al., 2006 ). When available, knowledge and beliefs on properties of items, on dependencies between them, or even on underlying mechanisms of causation inform these representations of structure. Pieces of knowledge are themselves embedded in mental models of how things work, which in turn guide tool use, decision-making, and problem-solving. For instance, rich knowledge on a domain affords reasoning strategies based on causal mechanisms, rather than category-based induction ( Medin and Atran, 2004 ); and beliefs on causal mechanisms affect not only what, but also how, people decide ( Kempton, 1986 ; Dörner, 1996 ; Güss and Robinson, 2014 ). On a higher level still, these various sorts of representations are organized by framework theories. Framework theories are ontological perspectives on the world, enriched with cultural values, that motivate interpretations, inferences, and intentions ( Bang et al., 2007 ). They affect, for instance, how information is filed in long-term memory, whether reasoning is biased by typicality and diversity effects, or on which principles domain boundaries are drawn ( Medin and Atran, 2004 ; ojalehto et al., 2017a , b ). This need not imply that causal models are uniform or coherent; in fact, apparently incompatible accounts can co-exist in an individual’s mind and are selectively accessed depending on contextual cues ( Astuti and Harris, 2008 ; Legare and Gelman, 2008 ).
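The notion of a structural model can be made concrete with a toy simulation (my illustration, not the article's): in a causal chain A → B → C, A and C are correlated overall, but the association is "screened off" once the intermediate cause B is held fixed — a statistical signature that distinguishes the chain structure from a direct A → C link:

```python
import random

random.seed(7)

def correlation(xs, ys):
    """Pearson correlation, computed from scratch on two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    vx = sum((x - mx) ** 2 for x in xs) / n
    vy = sum((y - my) ** 2 for y in ys) / n
    return cov / (vx * vy) ** 0.5

# A causal chain A -> B -> C with binary variables and noisy links.
data = []
for _ in range(50_000):
    a = random.random() < 0.5
    b = random.random() < (0.9 if a else 0.1)
    c = random.random() < (0.9 if b else 0.1)
    data.append((a, b, c))

# Marginally, A and C are strongly associated...
r_ac = correlation([a for a, _, _ in data], [c for _, _, c in data])

# ...but holding the intermediate cause B fixed screens the association off.
fixed_b = [(a, c) for a, b, c in data if b]
r_ac_given_b = correlation([a for a, _ in fixed_b], [c for _, c in fixed_b])

print(round(r_ac, 2), round(abs(r_ac_given_b), 2))
```

A reasoner equipped with such a structural representation can exploit exactly this pattern, which purely associative learning from the raw A-C regularity would miss.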

In other words, content impacts on processing. If, however, the integration of knowledge and beliefs into theory-like representations is indeed so essential and decisive, accounts of human causal cognition cannot afford to disregard content.

The Role of Knowledge Transmission for Content

A great deal of knowledge about causation can be gleaned from an individual’s interactions with the world, and observing statistical regularities may render a reasonably accurate model of causal relations, for instance when trying to diagnose and treat a common cold. Still, accounting for the underlying mechanisms is replete with interpretation and, often enough, pure speculation. The more elaborate such accounts are, the more likely they therefore are to encompass large portions that we simply learned from other people ( D’Andrade, 1995 ).

While learning from others is not an exclusively human ability, the extent to which our species capitalizes on it is indeed unique. Even as young children, humans pay specific attention to social cues ( Kushnir et al., 2008 ), and when copying problem-solving behavior, they “over-imitate,” by prioritizing conventional aspects over mechanistic aspects, whether or not the former are causally relevant ( Lyons et al., 2007 )—a tendency that further increases into adulthood ( McGuigan et al., 2011 ). Humans not only actively seek information, but are also willing to convey it. This willingness arises from our disposition for shared intentionality, for teaching, and for learning from teaching ( Tomasello et al., 2005 ; Csibra and Gergely, 2009 ).

In contrast to the acquisition of behavioral patterns and action-based problem-solving, teaching is indispensable for the explicit transmission of knowledge, particularly for knowledge on a subject that is as invisible and ephemeral as causality ( Waldmann et al., 2006 ). With language, humans have developed the most powerful tool in the entire animal kingdom for achieving this—a tool that young children already exploit in full when they ask for causal explanations, and persist in requesting more explanations if they are not satisfied with the previous ones ( Callanan and Oakes, 1992 ; Frazier et al., 2009 ).

Given its key role for knowledge accumulation, the impact of language and its usage on causal cognition should not be underestimated. Sometimes, a linguistic label may be sufficient to serve as a cue for causal assumptions (as is the case with the common cold, which, according to popular belief, is caused by exposure to cold weather). But language use can also affect cognition more subtly, through the ways in which information about causal relations and events is encoded, or in how event descriptions are linguistically prepacked or split into their components ( Wolff et al., 2009 ; Bohnemeyer et al., 2010 ). For instance, while “the climate is changing” and “humans are changing the climate” both describe the same event, the two linguistic constructions still suggest slightly diverging causal perspectives, one focusing on the event, and the other on the agent. Such modifications of the linguistic framing are able to redirect people’s attention to, in this case, event or agent ( Fausey et al., 2010 ); to alter their inferences on causal efficacy ( Kuhnmünch and Beller, 2005 ); to sway their memories of something they themselves observed ( Loftus and Palmer, 1974 ; Fausey et al., 2010 ); or to affect their assignment of agency, responsibility, and blame ( Fausey and Boroditsky, 2010 ; Bender and Beller, 2017 ).

In other words, content consists of knowledge that is socially accumulated and transmitted, frequently through explicit teaching using language. If, however, transmission is so crucial for content generation, with the means of transmission affecting causal representations and processing, accounts of human causal cognition cannot afford to disregard the role and the characteristics of the mechanisms involved.

The Role of Culture for Knowledge Transmission

Transmission of knowledge typically takes place within a social context. Social orientations and cultural practices therefore impact on every step of it: the bits and pieces of knowledge transmitted, the means of transmission, and the specific details of the transmission process itself.

As noted above, the bulk of people’s knowledge and beliefs is learned from others and hence bears the stamp of the cultural setting in which it emerged and is transmitted. Cultural shaping is amplified insofar as knowledge and beliefs are accumulated over time and integrated into larger models and framework theories ( Bang et al., 2007 ). Cultural framework theories not only provide distinct ontological perspectives, and hence endow meaning to the causal accounts of the very same event in notably different ways, but even entail different ways of partitioning the world into domains. The ontological perspective implicit in most Western framework theories, for instance, suggests partitioning into a physical, a biological, and a social-psychological domain, largely based on properties of their key entities and on corresponding principles for agency ascription ( Carey, 1996 , 2009 ; Spelke and Kinzler, 2007 ). The ontological perspective implicit in Amerindian framework theories, by contrast, emphasizes interconnectedness between entities, and hence suggests principles for agency ascription that are grounded in relations rather than properties, and that give rise to domains based on communication and exchange ( ojalehto et al., 2017a , b ).

As a consequence, causal cognition is infused with culture. People therefore differ in whether they engage in causal considerations on a regular basis ( Beer and Bender, 2015 ), and in how they weigh consequences versus causes ( Choi et al., 2003 ; Maddux and Yuki, 2006 ). They also differ in the principles in which category and domain boundaries are grounded ( ojalehto et al., 2017a , b ), and in the concepts that inform their explanations ( Beller et al., 2009 ). Even the biases that affect inferences differ across cultures ( Medin and Atran, 2004 ; Bender and Beller, 2011 ). Factors contributing to these differences include, among others, the cultural shaping of the settings in which causal cognition occurs; the extent to which socialization patterns and teaching strategies encourage or discourage exploration and requests for explanation; the culture-specific organization of causally relevant knowledge, concepts, and categories; and the language-specific encoding of causal relations in grammatical structure (for reviews, see Bender et al., 2017 ; Bender and Beller, 2019 ).

In other words, knowledge transmission is ingrained in culture. If, however, the accumulation and propagation of information is so dependent on cultural practices and institutions, accounts of human causal cognition cannot afford to disregard its cultural fabric.

Implications for Studying Causal Cognition in Humans and Machines

While causality might be objective, and our interest in it phylogenetically old, neither of the two is set in stone. As demonstrated by Iliev and colleagues ( Iliev and ojalehto, 2015 ; Iliev and Axelrod, 2016 ), the extent of our concern with causality has changed over time—even over the course of just one century—and so too has the usage of the corresponding vocabulary and concepts. Here, I argue that our scientific notions of causal cognition can, and in fact must, change as well.

Research on causal cognition has typically focused on how humans gain explanations for what is going on in the world. In so doing, it often reduces causal cognition to a few cognitive processes involved in perception, learning, reasoning, and attribution, which are investigated devoid of content or context. Yet, to achieve a more comprehensive understanding of what is going on when humans engage in causal cognition, we also need to take into account the content that informs the cognitive processing, the means and mechanisms of knowledge accumulation and transmission, and the cultural context in which both accumulation and transmission take place. All of these aspects are unique to, and constitutive of, human causal cognition, and have serious implications for how we study causal cognition in humans and machines.

As a first consequence, we may wish to acknowledge more phenomena as components of causal cognition than just the inferences drawn from patterns of statistical regularities. Included should be, inter alia , verbal accounts, principles for categorization, tool use in daily life, problem-solving in complex situations, or judgments of blameworthiness and punishment. Concurrently, the segregation between the physical and the social domain—and hence between causation and agency—should be abolished as arguably culture-specific categorizations.

As a second consequence, we may wish to reconsider the methods we apply for investigating causal cognition. The repertoire of research strategies should be extended beyond philosophical reflections and sterile lab experiments, to also include statistical analyses of linguistic data, in-depth within-culture analyses of cognitive concepts, processes, and changes over time, ethnographic observations, or cross-cultural and cross-linguistic studies ( Bender and Beller, 2016 ). Moreover, stronger efforts should be undertaken to increase the ecological validity afforded by our tools and settings.

A third consequence arises for attempts to model human causal cognition in machines. The recent exceptional progress in the area of artificial intelligence is largely thanks to the harnessing of deep learning for pattern recognition. Basically reflecting the “standard view” of causal cognition, this focus remains on the lowest rung of Pearl’s Ladder of Causation ( Pearl and Mackenzie, 2018 ) and falls short of resembling human competences. Two of the core ingredients proposed by Lake et al. (2017) for making machines “learn and think like people” include an ability to build causal models and the grounding of learning in intuitive theories of physics and psychology (a kind of developmental “start-up software”). This emphasis on structure and content echoes insights from research on causal cognition in humans and non-human species ( Pearl, 2000 ; Waldmann et al., 2006 ) and would ensure that most of the shared components of causal cognition are accounted for. Still, for modeling (uniquely) human characteristics, a further step needs to be taken: the implementation of social learning and cultural accumulation of knowledge, possibly enriched by language use ( Dennett and Lambert, 2017 ; Tessler et al., 2017 ). Learning from others not only requires fewer data and occurs at a higher speed, but is also a key mechanism in diversification. As Clegg and Corriveau (2017) put it: even if the developmental “start-up software” is assumed to be universal, the “software updates” are likely shaped by culture and may over time generate distinct operating systems.
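Pearl's distinction between "seeing" (rung one) and "doing" (rung two) can be sketched in a toy structural causal model; the variables and probabilities below are invented for illustration. Conditioning on an event gives a different answer than intervening to bring it about, because an intervention severs the variable from its own causes:

```python
import random

random.seed(42)

def draw(do_sprinkler=None):
    """One draw from a toy structural causal model:
    season -> rain, season -> sprinkler, rain + sprinkler -> wet grass.
    Passing do_sprinkler overrides the sprinkler's own mechanism,
    which is what an intervention (Pearl's do-operator) amounts to."""
    dry_season = random.random() < 0.5
    rain = random.random() < (0.1 if dry_season else 0.7)
    if do_sprinkler is None:
        sprinkler = random.random() < (0.8 if dry_season else 0.1)
    else:
        sprinkler = do_sprinkler
    wet = random.random() < 0.2 + 0.5 * rain + 0.3 * sprinkler
    return sprinkler, wet

# Rung 1, "seeing": condition on days the sprinkler happened to be on.
seen = [w for s, w in (draw() for _ in range(100_000)) if s]
p_seeing = sum(seen) / len(seen)

# Rung 2, "doing": force the sprinkler on, regardless of season.
done = [w for _, w in (draw(do_sprinkler=True) for _ in range(100_000))]
p_doing = sum(done) / len(done)

print(round(p_seeing, 2), round(p_doing, 2))  # the two probabilities differ
```

The two estimates diverge because, observationally, the sprinkler runs mostly in the dry season (when rain is unlikely), whereas the intervention ignores the season entirely; a learner stuck on the lowest rung cannot tell these two quantities apart.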

To sum up, gaining an understanding of cause-effect relations is an ability in which humans clearly and strikingly outperform any other species. To a great extent, this is due to the fact that in our species, individuals are not just reliant on drawing inferences from observed statistical regularities, each on their own, but are willing and able to share their observations, inferences, and interpretations, to accumulate them over time, and to transmit them to the next generation. The content, which is so crucial in human causal cognition, is a product of culture from the very beginning, rendered possible and profoundly shaped by the fact that humans are a cultural species ( Bender and Beller, 2019 ). While these characteristics of human causal cognition may not be considered relevant when transferring models from humans to machines—or may not even be desirable in some applications ( Livesey et al., 2017 )—it would at least be instructive to be aware of them.

Author Contributions

The author confirms being the sole contributor of this work and has approved it for publication.

Conflict of Interest

The author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Acknowledgments

I would like to thank the fellows of the Research Group “The cultural constitution of causal cognition,” funded by the Center for Interdisciplinary Research (ZiF) at Bielefeld University, for stimulating discussions, and Sarah Mannion de Hernandez for proof-reading and valuable comments on an earlier draft. This article is based in large part on collaborative work with my former colleague and partner, the late Sieghard Beller, who got me interested in causal cognition and to whom I remain indebted for inspiration and an exceptionally fruitful exchange of ideas.

Funding. This work was supported by the Research Council of Norway through the SFF Centre for Early Sapiens Behaviour (SapienCE), project number 262618. Publication fees are covered by the Library of the University of Bergen.

1 https://www.frontiersin.org/research-topics/9874/causal-cognition-in-humans-and-machines

  • Astuti R., Harris P. L. (2008). Understanding mortality and the life of the ancestors in rural Madagascar. Cogn. Sci. 32, 713–740. doi:10.1080/03640210802066907
  • Bang M., Medin D. L., Atran S. (2007). Cultural mosaics and mental models of nature. Proc. Natl. Acad. Sci. USA 104, 13868–13874. doi:10.1073/pnas.0706627104
  • Beer B., Bender A. (2015). Causal inferences about others’ behavior among the Wampar, Papua New Guinea—and why they are hard to elicit. Front. Psychol. 6:128, 1–14. doi:10.3389/fpsyg.2015.00128
  • Beller S., Bender A., Song J. (2009). Weighing up physical causes: effects of culture, linguistic cues, and content. J. Cogn. Cult. 9, 347–365. doi:10.1163/156770909x12518536414493
  • Bender A. (2019). The role of culture and evolution for human cognition. Top. Cogn. Sci. doi:10.1111/tops.12449 [online publication ahead of print]
  • Bender A., Beller S. (2011). Causal asymmetry across cultures: assigning causal roles in symmetric physical settings. Front. Psychol. 2:231. doi:10.3389/fpsyg.2011.00231
  • Bender A., Beller S. (2016). Probing the cultural constitution of causal cognition – a research program. Front. Psychol. 7:245, 1–6. doi:10.3389/fpsyg.2016.00245
  • Bender A., Beller S. (2017). Agents and patients in physical settings: linguistic cues affect the assignment of causality in German and Tongan. Front. Psychol. 8:1093. doi:10.3389/fpsyg.2017.01093
  • Bender A., Beller S. (2019). The cultural fabric of human causal cognition. Perspect. Psychol. Sci. 14, 922–940. doi:10.1177/1745691619863055
  • Bender A., Beller S., Medin D. L. (2017). “Causal cognition and culture” in The Oxford handbook of causal reasoning. ed. Waldmann M. R. (New York: Oxford University Press), 717–738.
  • Blaisdell A. P., Sawa K., Leising K. J., Waldmann M. R. (2006). Causal reasoning in rats. Science 311, 1020–1022. doi:10.1126/science.1121872
  • Bohnemeyer J., Enfield N. J., Essegbey J., Kita S. (2010). “The macro-event property: the segmentation of causal chains” in Event representation in language. eds. Bohnemeyer J., Pederson E. (Cambridge: Cambridge University Press), 43–67.
  • Callanan M. A., Oakes L. M. (1992). Preschoolers’ questions and parents’ explanations: causal thinking in everyday activity. Cogn. Dev. 7, 213–233. doi:10.1016/0885-2014(92)90012-G
  • Carey S. (1996). “Cognitive domains as modes of thought” in Modes of thought: Explorations in culture and cognition. eds. Olson D., Torrance N. (Cambridge: Cambridge University Press), 187–215.
  • Carey S. (2009). The origin of concepts. Oxford: Oxford University Press.
  • Choi I., Dalal R., Kim-Prieto C., Park H. (2003). Culture and judgement of causal relevance. J. Pers. Soc. Psychol. 84, 46–59. doi:10.1037/0022-3514.84.1.46
  • Choi I., Nisbett R. E., Norenzayan A. (1999). Causal attribution across cultures: variation and universality. Psychol. Bull. 125, 47–63. doi:10.1037/0033-2909.125.1.47
  • Clegg J. M., Corriveau K. H. (2017). Children begin with the same start-up software, but their software updates are cultural. Behav. Brain Sci. 40:32. doi:10.1017/s0140525x17000097
  • Csibra G., Gergely G. (2009). Natural pedagogy. Trends Cogn. Sci. 13, 148–153. doi:10.1016/j.tics.2009.01.005
  • D’Andrade R. G. (1995). The development of cognitive anthropology. Cambridge: Cambridge University Press.
  • Dennett D. C., Lambert E. (2017). Thinking like animals or thinking like colleagues? Behav. Brain Sci. 40, 34–35. doi:10.1017/s0140525x17000127
  • Dickinson A., Balleine B. W. (2000). “Causal cognition and goal-directed action” in Vienna series in theoretical biology: The evolution of cognition. eds. Heyes C., Huber L. (Cambridge: The MIT Press), 185–204.
  • Dörner D. (1996). The logic of failure. New York: Basic Books.
  • Fausey C. M., Boroditsky L. (2010). Subtle linguistic cues influence perceived blame and financial liability. Psychon. Bull. Rev. 17, 644–650. doi:10.3758/PBR.17.5.644
  • Fausey C., Long B., Inamori A., Boroditsky L. (2010). Constructing agency: the role of language. Front. Psychol. 1:162. doi:10.3389/fpsyg.2010.00162
  • Frazier B. N., Gelman S. A., Wellman H. M. (2009). Preschoolers’ search for explanatory information within adult–child conversation. Child Dev. 80, 1592–1611. doi:10.1111/j.1467-8624.2009.01356.x
  • Gopnik A., Glymour C., Sobel D. M., Schulz L. E., Kushnir T., Danks D. (2004). A theory of causal learning in children: causal maps and Bayes nets. Psychol. Rev. 111, 3–32. doi:10.1037/0033-295X.111.1.3
  • Gopnik A., Meltzoff A. N., Kuhl P. K. (1999). The scientist in the crib: Minds, brains, and how children learn. New York: William Morrow & Co.
  • Gopnik A., Sobel D. (2000). Detecting blickets: how young children use information about novel causal powers in categorization and induction. Child Dev. 71, 1205–1222. doi:10.1111/1467-8624.00224
  • Güss C. D., Robinson B. (2014). Predicted causality in decision making: the role of culture. Front. Psychol. 5:479. doi:10.3389/fpsyg.2014.00479
  • Iliev R., Axelrod R. (2016). Does causality matter more now? Increase in the proportion of causal language in English texts. Psychol. Sci. 27, 635–643. doi:10.1177/0956797616630540
  • Iliev R., ojalehto B. (2015). Bringing history back to culture: on the missing diachronic component in the research on culture and cognition. Front. Psychol. 6:716. doi:10.3389/fpsyg.2015.00716
  • Kelley H. H. (1973). The processes of causal attribution. Am. Psychol. 28, 107–128. doi:10.1037/h0034225
  • Kempton W. M. (1986). Two theories of home heat control. Cogn. Sci. 10, 75–90. doi:10.1207/s15516709cog1001_3
  • Kuhnmünch G., Beller S. (2005). Distinguishing between causes and enabling conditions – through mental models or linguistic cues? Cogn. Sci. 29, 1077–1090. doi:10.1207/s15516709cog0000_39
  • Kushnir T., Wellman H. M., Gelman S. A. (2008). The role of preschoolers’ social understanding in evaluating the informativeness of causal interventions. Cognition 107, 1084–1092. doi:10.1016/j.cognition.2007.10.004
  • Lake B. M., Ullman T. D., Tenenbaum J. B., Gershman S. J. (2017). Building machines that learn and think like people. Behav. Brain Sci. 40, 1–72. doi:10.1017/s0140525x16001837
  • Legare C. H., Gelman S. A. (2008). Bewitchment, biology, or both: the co-existence of natural and super-natural explanatory frameworks across development. Cogn. Sci. 32, 607–642. doi:10.1080/03640210802066766
  • Leslie A. M. (1994). “ToMM, ToBy, and agency: Core architecture and domain specificity” in Mapping the mind: Domain specificity in cognition and culture. eds. Hirschfeld L. A., Gelman S. A. (Cambridge: Cambridge University Press), 118–148.
  • Livesey E. J., Goldwater M. B., Colagiuri B. (2017). Will human-like machines make human-like mistakes? Behav. Brain Sci. 40:41. doi:10.1017/S0140525X1700019X
  • Loftus E. F., Palmer J. C. (1974). Reconstruction of automobile destruction: an example of the interaction between language and memory. J. Verbal Learn. Verbal Behav. 13, 585–589. doi:10.1016/S0022-5371(74)80011-3
  • Lombard M., Gärdenfors P. (2017). Tracking the evolution of causal cognition in humans. J. Anthropol. Sci. 95, 1–16. doi:10.4436/JASS.95006
  • Lyons D. E., Young A. G., Keil F. C. (2007). The hidden structure of overimitation. Proc. Natl. Acad. Sci. USA 104, 19751–19756. doi:10.1073/pnas.0704452104
  • Maddux W. W., Yuki M. (2006). The “ripple effect”: cultural differences in perceptions of the consequences of events. Pers. Soc. Psychol. Bull. 32, 669–683. doi:10.1177/0146167205283840
  • McGuigan N., Makinson J., Whiten A. (2011). From over-imitation to super-copying: adults imitate causally irrelevant aspects of tool use with higher fidelity than young children. Br. J. Psychol. 102, 1–18. doi:10.1348/000712610X493115
  • Medin D. L., Atran S. (2004). The native mind: biological categorization and reasoning in development and across cultures. Psychol. Rev. 111, 960–983. doi:10.1037/0033-295x.111.4.960
  • Michotte A. E. (1963). The perception of causality. London: Methuen [original published in 1946].
  • Morris M. W., Peng K. (1994). Culture and cause: American and Chinese attributions for social and physical events. J. Pers. Soc. Psychol. 67, 949–971. doi:10.1037/0022-3514.67.6.949
  • Muentener P., Bonawitz E. B. (2017). “The development of causal reasoning” in The Oxford handbook of causal reasoning. ed. Waldmann M. R. (New York: Oxford University Press), 677–698.
  • Norenzayan A., Nisbett R. E. (2000). Culture and causal cognition. Curr. Dir. Psychol. Sci. 9, 132–135. doi:10.1111/1467-8721.00077
  • ojalehto B., Medin D. L., García S. G. (2017a). Conceptualizing agency: Folkpsychological and folkcommunicative perspectives on plants. Cognition 162, 103–123. doi:10.1016/j.cognition.2017.01.023
  • ojalehto B., Medin D. L., García S. G. (2017b). Grounding principles for inferring agency: two cultural perspectives. Cogn. Psychol. 95, 50–78. doi:10.1016/j.cogpsych.2017.04.001
  • Pearl J. (2000). Causality: Models, reasoning and inference. Cambridge: MIT Press.
  • Pearl J., Mackenzie D. (2018). The book of why: The new science of cause and effect. New York: Basic Books.
  • Penn D. C., Povinelli D. J. (2007). Causal cognition in human and nonhuman animals: a comparative, critical review. Annu. Rev. Psychol. 58, 97–118. doi:10.1146/annurev.psych.58.110405.085555
  • Saxe R., Carey S. (2006). The perception of causality in infancy. Acta Psychol. 123, 144–165. doi:10.1016/j.actpsy.2006.05.005
  • Shanks D. R., Holyoak K., Medin D. L. (Eds.) (1996). Causal learning. San Diego: Academic Press.
  • Spelke E. S., Kinzler K. D. (2007). Core knowledge. Dev. Sci. 10, 89–96. doi:10.1111/j.1467-7687.2007.00569.x
  • Sperber D., Premack D., Premack A. J. (Eds.) (1995). Causal cognition: A multidisciplinary debate. Oxford: Clarendon Press.
  • Stuart-Fox M. (2015). The origins of causal cognition in early hominins. Biol. Philos. 30, 247–266. doi:10.1007/s10539-014-9462-y
  • Tessler M. H., Goodman N. D., Frank M. C. (2017). Avoiding frostbite: it helps to learn from others. Behav. Brain Sci. 40, 48–49. doi:10.1017/s0140525x17000280
  • Tomasello M., Carpenter M., Call J., Behne T., Moll H. (2005). Understanding and sharing intentions: the origins of cultural cognition. Behav. Brain Sci. 28, 675–735. doi:10.1017/S0140525X05000129
  • Waldmann M. R. (Ed.) (2017). The Oxford handbook of causal reasoning. New York: Oxford University Press.
  • Waldmann M. R., Hagmayer Y., Blaisdell A. P. (2006). Beyond the information given: causal models in learning and reasoning. Curr. Dir. Psychol. Sci. 15, 307–311. doi:10.1111/j.1467-8721.2006.00458.x
  • Wolff P., Jeon G. H., Li Y. (2009). Causers in English, Korean, and Chinese and the individuation of events. Lang. Cogn. 1, 167–196. doi:10.1515/LANGCOG.2009.009
  • Zuberbühler K. (2000). Causal cognition in a non-human primate: field playback experiments with Diana monkeys. Cognition 76, 195–207. doi:10.1016/S0010-0277(00)00079-2

Introducing Causality in Psychology

  • First Online: 18 May 2016

  • Gerald Young

This chapter of the present book further elaborates the triadic axis model of causality in psychology, as presented in Young (Development and causality: Neo-Piagetian perspectives. New York: Springer Science + Business Media, 2011). Although it describes the scope of the study of causality across multiple disciplines, it still considers the primary axes in this regard to be free will, mechanism, and causal graph modeling. In this chapter, I elaborate further on these three axes in the study of causality in psychology.

This chapter is especially based on the details of my approach to causality as described in Young (Development and causality: Neo-Piagetian perspectives. New York: Springer Science + Business Media, 2011). I have taken the kernel arguments related to causality in that book and summarized them. Of note, I introduce the following concepts related to causality as central to psychology: the causal landscape and causal streams; hot vs. cold causality; and dimensions in causality study. Also in Young (Development and causality: Neo-Piagetian perspectives. New York: Springer Science + Business Media, 2011), as summarized in this third chapter of the present book, I was developing models to help in the study of behavioral causality, such as one integrating the concept of reaction range with the model of differential susceptibility. The chapter also presents other critical models that help in better understanding the causality of behavior. These include NLDST, a stage or step approach to both evolution and development, and the concept of activation–inhibition coordination.

The next part of the chapter gives new material about the three major axes in the study of behavioral causality. (a) For the topic of free will, it presents more material on free being, which concerns both having a belief in free will and having a sense of free will. (b) As for mechanism, I especially present work on energy dynamics as sources of causality throughout the universe and its evolution over time. Inevitably, this concept applies to psychology as well, for example in NLDST. (c) Finally, I give new material explaining causal graph/network modeling. This relates to the work of Sloman (Causal models: How people think about the world and its alternatives. New York: Oxford University Press, 2005), which I use to help structure a better understanding of behavioral causality.

Baumeister, R. F. (2008). Free will in scientific psychology. Perspectives on Psychological Science, 3 , 14–19.

Baumeister, R. F., Crescioni, A. W., & Alquist, J. L. (2011). Free will as advanced action control for human social life and culture. Neuroethics, 4 , 1–11.

Beebee, H., Hitchcock, C., & Menzies, P. (2009). The Oxford handbook of causation . New York: Oxford University Press.

Belsky, J., & Pluess, M. (2009a). Beyond diathesis stress: Differential susceptibility to environmental influences. Psychological Bulletin, 135 , 885–908.

Belsky, J., & Pluess, M. (2009b). The nature (and nurture?) of plasticity in early human development. Perspectives on Psychological Science, 4 , 345–351.

Berger, A. (2011). Self-regulation: Brain, cognition, and development . Washington: American Psychological Association.

Borsboom, D., & Cramer, A. O. J. (2014). Network analysis: An integrative approach to the structure of psychopathology. Annual Review of Clinical Psychology, 9 , 91–121.

Carlson, S. M. (2010). Development of conscious control and imagination. In R. F. Baumeister, A. R. Mele, & K. D. Vohs (Eds.), Free will and consciousness: How might they work? (pp. 135–152). New York: Oxford University Press.

Carver, C. S., Johnson, S. L., Joormann, J., Kim, Y., & Nam, J. Y. (2011). Serotonin transporter polymorphism interacts with childhood adversity to predict aspects of impulsivity. Psychological Science, 22 , 589–595.

Caspi, A., Hariri, A. R., Holmes, A., Uher, R., & Moffitt, T. E. (2010). Genetic sensitivity to the environment: The case of the serotonin transporter gene and its implications for studying complex diseases and traits. American Journal of Psychiatry, 167 , 509–527.

Caspi, A., McClay, J., Moffitt, T. E., Mill, J., Martin, J., Craig, I. W., Taylor, A., & Poulton, R. (2002). Role of genotype in the cycle of violence in maltreated children. Science, 297 , 851–854.

Caspi, A., Sugden, K., Moffitt, T. E., Taylor, A., Craig, I. W., Harrington, H., et al. (2003). Influence of life stress on depression: Moderation by a polymorphism in the 5-HTT gene. Science, 301 , 386–389.

Chaisson, E. J. (2010). Energy rate density as a complexity metric and evolutionary driver. Complexity, 16 , 27–40.

Chaisson, E. J. (2011). Energy rate density. II. Probing further a new complexity metric. Complexity, 17 , 44–63.

Cocchiarella, L., & Lord, S. J. (Eds.). (2001). Master the AMA guides fifth (5th ed., pp. 327–341). Chicago: American Medical Association.

Danks, D. (2014). Unifying the mind: Cognitive representations as graphical models . Cambridge, MA: The MIT Press.

Dick, D. M. (2011). Gene-environment interaction in psychological traits and disorders. Annual Review of Clinical Psychology, 7 , 383–409.

Dick, D. M., Meyers, J. L., Latendresse, S. J., Creemers, H. E., Lansford, J. E., Pettit, G. S., et al. (2011). CHRM2 , parental monitoring, and adolescent externalizing behavior: Evidence for gene-environment interaction. Psychological Science, 22 , 481–489.

Douglas, K. S., Huss, M. T., Murdoch, L. L., Washington, D. O., & Koch, W. J. (1999). Posttraumatic stress disorder stemming from motor vehicle accidents: Legal issues in Canada and the United States. In E. J. Hickling & E. B. Blanchard (Eds.), The international handbook of road traffic accidents and psychological trauma: Current understanding, treatment and law (pp. 271–289). New York: Elsevier.

Ellis, B. J., & Boyce, W. T. (2008). Biological sensitivity to context. Current Directions in Psychological Science, 17 , 183–187.

Garner, B. A. (Ed.). (2009). Black’s law dictionary (9th ed.). St. Paul, MN: West.

Gopnik, A., & Schulz, L. (2007). Causal learning: Psychology, philosophy, and computation . New York: Oxford University Press.

Haynes, S. N., O’Brien, W. H., & Kaholokula, J. K. (2011). Behavioral assessment and case formulation . Hoboken, NJ: Wiley.

Heidegger, M. (1927/1962). Being and time . London, UK: SCM.

Illari, P. M., Russo, F., & Williamson, J. (Eds.). (2011a). Causality in the sciences . New York: Oxford University Press.

Illari, P. M., Russo, F., & Williamson, J. (2011b). Why look at causality in the sciences? A manifesto. In P. M. Illari, F. Russo, & J. Williamson (Eds.), Causality in the sciences (pp. 3–22). New York: Oxford University Press.

Karg, K., Shedden, K., Burmeister, M., & Sen, S. (2011). The serotonin transporter promoter variant ( 5-HTTLPR ), stress, and depression meta-analysis revisited: Evidence of genetic moderation. Archives of General Psychiatry, 68 , 444–454.

Kauffman, S. (1993). The origins of order: Self-organization and selection in evolution . New York: Oxford University Press.

Layne, C. M., Steinberg, J. R., & Steinberg, A. M. (2014). Causal reasoning skills training for mental health practitioners: Promoting sound clinical judgment in evidence-based practice. Training and Education in Professional Psychology . doi: 10.1037/tep0000037 .

Markus, K. A. (2011). Real causes and ideal manipulations: Pearl’s theory of causal inference from the point of view of psychological research methods. In P. M. Illari, F. Russo, & J. Williamson (Eds.), Causality in the sciences (pp. 240–269). New York: Oxford University Press.

McNally, R. J., Robinaugh, D. J., Wu, G. W. Y., Wang, L., Deserno, M. K., & Borsboom, D. (2015). Mental disorders as causal systems: A network approach to posttraumatic stress disorder. Clinical Psychological Science 3 , 836–849.

Nowak, M. A., Tarnita, C. E., & Wilson, E. O. (2010). The evolution of eusociality. Nature, 466 , 1057–1062.

Pearl, J. (2000). Causality: Models, reasoning, and inference . New York: Cambridge University Press.

Pearl, J. (2009). Causality: Models, reasoning, and inference (2nd ed.). New York: Cambridge University Press.

Russo, F. (2009). Causality and causal modeling in the social sciences: Measuring variations . New York: Springer Science + Business Media.

Shrout, P. E., Keyes, K. M., & Ornstein, K. (Eds.). (2011). Causality and psychopathology: Finding the determinants of disorders and their cures . New York: Oxford University Press.

Simpson, J. A., & Belsky, J. (2008). Attachment theory within a modern evolutionary framework. In J. Cassidy & P. R. Shaver (Eds.), Handbook of attachment: Theory, research, and clinical applications (pp. 131–157). New York: Guilford Press.

Sloman, S. (2005). Causal models: How people think about the world and its alternatives . New York: Oxford University Press.

Spencer, J. P., Austin, A., & Schutte, A. R. (2012). Contributions of dynamic systems theory to cognitive development. Cognitive Development, 27 , 401–418.

Spirtes, P., Glymour, C., & Scheines, R. (2001). Causation, prediction, and search . Cambridge, MA: MIT Press.

Sporns, O. (2011). Networks of the brain . Cambridge, MA: MIT Press.

Sporns, O. (2012). Discovering the human connectome . Cambridge, MA: MIT Press.

Thelen, E., & Smith, L. B. (2006). Dynamic systems theories. In W. Damon & R. M. Lerner (Eds.), Handbook of child psychology: Vol. 1. Theoretical models of human development (6th ed., pp. 258–312). Hoboken, NJ: Wiley.

Tinbergen, N. (1963). On aims and methods in ethology. Zeitschrift für Tierpsychologie, 20 , 410–433.

Vauclair, J., & Imbault, J. (2009). Relationship between manual preferences for object manipulation and pointing gestures for infants and toddlers. Developmental Science, 12 , 1060–1069.

Woodward, J. (2007). Interventionist theories of causation in psychological perspective. In A. Gopnik & L. Schulz (Eds.), Causal learning: Psychology, philosophy, and computation (pp. 19–36). New York: Oxford University Press.

Young, G. (2008). Causality and causation in law, medicine, psychiatry, and psychology: Progression or regression? Psychological Injury and Law, 1 , 161–181.

Young, G. (2011). Development and causality: Neo-Piagetian perspectives . New York: Springer Science + Business Media.

Young, G. (2014). Malingering, feigning, and response bias in psychiatric/psychological injury: Implications for practice and court . Dordrecht, Netherlands: Springer Science + Business Media.

Young, G. (2015). Causality in civil disability and criminal forensic cases: Legal and psychological comparison. International Journal of Law and Psychiatry, 37 .

Young, G., & Gagnon, M. (1990). Neonatal laterality, birth stress, familial sinistrality, and left brain inhibition. Developmental Neuropsychology, 6 , 127–150.

Young, G., Kane, A. W., & Nicholson, K. (2007). Causality of psychological injury: Presenting evidence in court . New York: Springer Science + Business Media.

Young, G., & Shore, R. (2007). Dictionary of terms related to causality, causation, law, and psychology. In G. Young, A. W. Kane, & K. Nicholson (Eds.), Causality of psychological injury: Presenting evidence in court (pp. 87–135). New York: Springer Science + Business Media.

Zelazo, P. D. (2004). The development of conscious control in childhood. Trends in Cognitive Sciences, 8 , 12–17.

Zelazo, P. D., Carlson, S. M., & Kesek, A. (2008). The development of executive function in childhood. In C. Nelson & M. Luciana (Eds.), Handbook of developmental cognitive neuroscience (2nd ed., pp. 553–574). Cambridge, MA: MIT Press.

Author information

Authors and Affiliations

Toronto, ON, Canada

Gerald Young

Copyright information

© 2016 Springer International Publishing Switzerland

About this chapter

Young, G. (2016). Introducing Causality in Psychology. In: Unifying Causality and Psychology. Springer, Cham. https://doi.org/10.1007/978-3-319-24094-7_3

DOI : https://doi.org/10.1007/978-3-319-24094-7_3

Published : 18 May 2016

Publisher Name : Springer, Cham

Print ISBN : 978-3-319-24092-3

Online ISBN : 978-3-319-24094-7

eBook Packages : Behavioral Science and Psychology, Behavioral Science and Psychology (R0)

Overview of the Scientific Method

10 Developing a Hypothesis

Learning Objectives

  • Distinguish between a theory and a hypothesis.
  • Discover how theories are used to generate hypotheses and how the results of studies can be used to further inform theories.
  • Understand the characteristics of a good hypothesis.

Theories and Hypotheses

Before describing how to develop a hypothesis, it is important to distinguish between a theory and a hypothesis. A theory is a coherent explanation or interpretation of one or more phenomena. Although theories can take a variety of forms, one thing they have in common is that they go beyond the phenomena they explain by including variables, structures, processes, functions, or organizing principles that have not been observed directly. Consider, for example, Zajonc’s theory of social facilitation and social inhibition (1965)[1]. He proposed that being watched by others while performing a task creates a general state of physiological arousal, which increases the likelihood of the dominant (most likely) response. So for highly practiced tasks, being watched increases the tendency to make correct responses, but for relatively unpracticed tasks, being watched increases the tendency to make incorrect responses. Notice that this theory—which has come to be called drive theory—provides an explanation of both social facilitation and social inhibition that goes beyond the phenomena themselves by including concepts such as “arousal” and “dominant response,” along with processes such as the effect of arousal on the dominant response.

Outside of science, referring to an idea as a theory often implies that it is untested—perhaps no more than a wild guess. In science, however, the term theory has no such implication. A theory is simply an explanation or interpretation of a set of phenomena. It can be untested, but it can also be extensively tested, well supported, and accepted as an accurate description of the world by the scientific community. The theory of evolution by natural selection, for example, is a theory because it is an explanation of the diversity of life on earth—not because it is untested or unsupported by scientific research. On the contrary, the evidence for this theory is overwhelmingly positive and nearly all scientists accept its basic assumptions as accurate. Similarly, the “germ theory” of disease is a theory because it is an explanation of the origin of various diseases, not because there is any doubt that many diseases are caused by microorganisms that infect the body.

A hypothesis, on the other hand, is a specific prediction about a new phenomenon that should be observed if a particular theory is accurate. It is an explanation that relies on just a few key concepts. Hypotheses are often specific predictions about what will happen in a particular study. They are developed by considering existing evidence and using reasoning to infer what will happen in the specific context of interest. Hypotheses are often, but not always, derived from theories. So a hypothesis is often a prediction based on a theory, but some hypotheses are atheoretical; only after a set of observations has been made is a theory developed. This is because theories are broad in nature and explain larger bodies of data. So if our research question is really original, we may need to collect some data and make some observations before we can develop a broader theory.

Theories and hypotheses always have this if-then relationship. “If drive theory is correct, then cockroaches should run through a straight runway faster, and through a branching runway more slowly, when other cockroaches are present.” Although hypotheses are usually expressed as statements, they can always be rephrased as questions. “Do cockroaches run through a straight runway faster when other cockroaches are present?” Thus deriving hypotheses from theories is an excellent way of generating interesting research questions.

But how do researchers derive hypotheses from theories? One way is to generate a research question using the techniques discussed in this chapter and then ask whether any theory implies an answer to that question. For example, you might wonder whether expressive writing about positive experiences improves health as much as expressive writing about traumatic experiences. Although this question is an interesting one on its own, you might then ask whether the habituation theory—the idea that expressive writing causes people to habituate to negative thoughts and feelings—implies an answer. In this case, it seems clear that if the habituation theory is correct, then expressive writing about positive experiences should not be effective because it would not cause people to habituate to negative thoughts and feelings. A second way to derive hypotheses from theories is to focus on some component of the theory that has not yet been directly observed. For example, a researcher could focus on the process of habituation—perhaps hypothesizing that people should show fewer signs of emotional distress with each new writing session.

Among the very best hypotheses are those that distinguish between competing theories. For example, Norbert Schwarz and his colleagues considered two theories of how people make judgments about themselves, such as how assertive they are (Schwarz et al., 1991) [2] . Both theories held that such judgments are based on relevant examples that people bring to mind. However, one theory was that people base their judgments on the  number  of examples they bring to mind and the other was that people base their judgments on how  easily  they bring those examples to mind. To test these theories, the researchers asked people to recall either six times when they were assertive (which is easy for most people) or 12 times (which is difficult for most people). Then they asked them to judge their own assertiveness. Note that the number-of-examples theory implies that people who recalled 12 examples should judge themselves to be more assertive because they recalled more examples, but the ease-of-examples theory implies that participants who recalled six examples should judge themselves as more assertive because recalling the examples was easier. Thus the two theories made opposite predictions so that only one of the predictions could be confirmed. The surprising result was that participants who recalled fewer examples judged themselves to be more assertive—providing particularly convincing evidence in favor of the ease-of-retrieval theory over the number-of-examples theory.

Theory Testing

The primary way that scientific researchers use theories is sometimes called the hypothetico-deductive method  (although this term is much more likely to be used by philosophers of science than by scientists themselves). Researchers begin with a set of phenomena and either construct a theory to explain or interpret them or choose an existing theory to work with. They then make a prediction about some new phenomenon that should be observed if the theory is correct. Again, this prediction is called a hypothesis. The researchers then conduct an empirical study to test the hypothesis. Finally, they reevaluate the theory in light of the new results and revise it if necessary. This process is usually conceptualized as a cycle because the researchers can then derive a new hypothesis from the revised theory, conduct a new empirical study to test the hypothesis, and so on. As  Figure 2.3  shows, this approach meshes nicely with the model of scientific research in psychology presented earlier in the textbook—creating a more detailed model of “theoretically motivated” or “theory-driven” research.
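The cycle described above can be sketched, very loosely, as a loop. This is only an illustration of the process as a procedure; the function names and data shapes are invented for this sketch and are not from the text.

```python
# A toy sketch of the hypothetico-deductive cycle described above.
# The function names and dictionary keys are illustrative placeholders.

def hypothetico_deductive_cycle(theory, derive_hypothesis, run_study,
                                max_iterations=5):
    """Derive a prediction from the theory, test it empirically, and
    revise the theory in light of the results, repeating as needed."""
    for _ in range(max_iterations):
        hypothesis = derive_hypothesis(theory)   # prediction from the theory
        result = run_study(hypothesis)           # empirical test
        if result["confirmed"]:
            return theory                        # theory survives this test
        theory = result["revised_theory"]        # revise and go around again
    return theory
```

The loop makes the cyclical character explicit: a confirmed prediction leaves the theory standing for now, while a disconfirmed one sends a revised theory back around for a new hypothesis and a new study.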


As an example, let us consider Zajonc’s research on social facilitation and inhibition. He started with a somewhat contradictory pattern of results from the research literature. He then constructed his drive theory, according to which being watched by others while performing a task causes physiological arousal, which increases an organism’s tendency to make the dominant response. This theory predicts social facilitation for well-learned tasks and social inhibition for poorly learned tasks. He now had a theory that organized previous results in a meaningful way—but he still needed to test it. He hypothesized that if his theory was correct, he should observe that the presence of others improves performance in a simple laboratory task but inhibits performance in a difficult version of the very same laboratory task. To test this hypothesis, one of the studies he conducted used cockroaches as subjects (Zajonc, Heingartner, & Herman, 1969) [3]. The cockroaches ran either down a straight runway (an easy task for a cockroach) or through a cross-shaped maze (a difficult task for a cockroach) to escape into a dark chamber when a light was shined on them. They did this either alone or in the presence of other cockroaches in clear plastic “audience boxes.” Zajonc found that cockroaches in the straight runway reached their goal more quickly in the presence of other cockroaches, but cockroaches in the cross-shaped maze reached their goal more slowly when other cockroaches were present. Thus he confirmed his hypothesis and provided support for his drive theory. (In many later studies, Zajonc also demonstrated the same drive-theory effects in humans; Zajonc & Sales, 1966 [4].)
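The crossover pattern Zajonc predicted can be made concrete with a small sketch. The numbers below are invented purely for illustration; they are not Zajonc's data, and the function is a hypothetical stand-in for his prediction, not his analysis.

```python
# Hypothetical escape times (seconds) illustrating the drive-theory
# prediction: an audience strengthens the dominant response, which is
# correct on the easy straight runway and incorrect in the hard maze.
# All numbers are invented for illustration.

BASE_TIME = {"easy": 10.0, "hard": 30.0}

def predicted_escape_time(task, audience_present):
    """Return a hypothetical escape time for a cockroach."""
    time = BASE_TIME[task]
    if audience_present:
        # Social facilitation on the easy task, social inhibition
        # on the hard task.
        time *= 0.8 if task == "easy" else 1.25
    return time
```

The sketch reproduces the interaction the theory predicts: audience presence lowers times on the easy task and raises them on the hard task, which is exactly the pattern Zajonc observed.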

Incorporating Theory into Your Research

When you write your research report or plan your presentation, be aware that there are two basic ways that researchers usually include theory. The first is to raise a research question, answer that question by conducting a new study, and then offer one or more theories (usually more) to explain or interpret the results. This format works well for applied research questions and for research questions that existing theories do not address. The second way is to describe one or more existing theories, derive a hypothesis from one of those theories, test the hypothesis in a new study, and finally reevaluate the theory. This format works well when there is an existing theory that addresses the research question—especially if the resulting hypothesis is surprising or conflicts with a hypothesis derived from a different theory.

Using theories in your research will not only give you guidance in coming up with experiment ideas and possible projects, it will also lend legitimacy to your work. Psychologists have been interested in a variety of human behaviors and have developed many theories along the way. Using established theories will help you break new ground as a researcher, not limit you in developing your own ideas.

Characteristics of a Good Hypothesis

There are three general characteristics of a good hypothesis. First, a good hypothesis must be testable and falsifiable. We must be able to test the hypothesis using the methods of science and, per Popper’s falsifiability criterion, it must be possible to gather evidence that will disconfirm the hypothesis if it is indeed false. Second, a good hypothesis must be logical. As described above, hypotheses are more than just random guesses; they should be informed by previous theories or observations and by logical reasoning. Typically, we begin with a broad and general theory and use deductive reasoning to generate a more specific hypothesis to test based on that theory. Occasionally, however, when there is no theory to inform our hypothesis, we use inductive reasoning, which involves using specific observations or research findings to form a more general hypothesis. Finally, the hypothesis should be positive. That is, the hypothesis should assert that a relationship or effect exists, rather than that it does not. As scientists, we do not set out to show that relationships do not exist or that effects do not occur, so our hypotheses should not be worded to suggest that an effect or relationship is absent. Instead, the scientific method begins by assuming that an effect does not exist and then seeks evidence to prove this assumption wrong, showing that the effect really does exist. That may seem backward, but it is how the underlying statistical theory works; the details are beyond the scope of this chapter.
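The statistical logic alluded to at the end of this passage (assume no effect, then look for evidence against that assumption) can be sketched with a simple permutation test. The data and function below are illustrative only and are not from the chapter.

```python
# A minimal permutation test: assume the null hypothesis of "no
# difference between groups," then ask how often randomly shuffled
# data produce a difference at least as large as the one observed.
import random
import statistics

def permutation_test(group_a, group_b, n_permutations=5000, seed=0):
    """Approximate p-value for the null hypothesis of equal means."""
    rng = random.Random(seed)
    observed = abs(statistics.mean(group_a) - statistics.mean(group_b))
    pooled = list(group_a) + list(group_b)
    n_a = len(group_a)
    extreme = 0
    for _ in range(n_permutations):
        rng.shuffle(pooled)
        diff = abs(statistics.mean(pooled[:n_a]) -
                   statistics.mean(pooled[n_a:]))
        if diff >= observed:
            extreme += 1
    return extreme / n_permutations
```

A small p-value means the observed difference would rarely arise if no effect existed, which is the sense in which science "assumes something does not exist and then seeks to prove this wrong."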

  • Zajonc, R. B. (1965). Social facilitation. Science, 149, 269–274.
  • Schwarz, N., Bless, H., Strack, F., Klumpp, G., Rittenauer-Schatka, H., & Simons, A. (1991). Ease of retrieval as information: Another look at the availability heuristic. Journal of Personality and Social Psychology, 61, 195–202.
  • Zajonc, R. B., Heingartner, A., & Herman, E. M. (1969). Social enhancement and impairment of performance in the cockroach. Journal of Personality and Social Psychology, 13, 83–92.
  • Zajonc, R. B., & Sales, S. M. (1966). Social facilitation of dominant and subordinate responses. Journal of Experimental Social Psychology, 2, 160–168.

Key Terms

Theory: A coherent explanation or interpretation of one or more phenomena.

Hypothesis: A specific prediction about a new phenomenon that should be observed if a particular theory is accurate.

Hypothetico-deductive method: A cyclical process of theory development, starting with an observed phenomenon, then developing or using a theory to make a specific prediction of what should happen if that theory is correct, testing that prediction, refining the theory in light of the findings, and using that refined theory to develop new hypotheses, and so on.

Testability and falsifiability: The ability to test the hypothesis using the methods of science and the possibility to gather evidence that will disconfirm the hypothesis if it is indeed false.

Research Methods in Psychology Copyright © 2019 by Rajiv S. Jhangiani, I-Chant A. Chiang, Carrie Cuttler, & Dana C. Leighton is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License , except where otherwise noted.



Causal Theories of Mental Content

Causal theories of mental content attempt to explain how thoughts can be about things. They attempt to explain how one can think about, for example, dogs. These theories begin with the idea that there are mental representations and that thoughts are meaningful in virtue of a causal connection between a mental representation and some part of the world that is represented. In other words, the point of departure for these theories is that thoughts of dogs are about dogs because dogs cause the mental representations of dogs.

  • 1. Introduction
  • 2. Some Historical and Theoretical Context
  • 3. Specific Causal Theories of Mental Content
  • 3.1 Normal Conditions
  • 3.2 Evolutionary Functions
  • 3.3 Developmental Functions
  • 3.4 Asymmetric Dependency Theory
  • 3.5 Best Test Theory
  • 4.1 Causal Theories Do Not Work for Logical and Mathematical Relations
  • 4.2 Causal Theories Do Not Work for Vacuous Terms
  • 4.3 Causal Theories Do Not Work for Phenomenal Intentionality
  • 4.4 Causal Theories Do Not Work for Certain Reflexive Thoughts
  • 4.5 Causal Theories Do Not Work for Reliable Misrepresentations
  • 4.6 Causal Theories Conflict with the Theory Mediation of Perception
  • 4.7 Causal Theories Conflict with the Implementation of Psychological Laws
  • 4.8 Causal Theories Do Not Provide a Metasemantic Theory
  • 5. Concluding Remarks
  • Bibliography
  • Other Internet Resources
  • Related Entries

1. Introduction

Content is what is said, asserted, thought, believed, desired, hoped for, etc. Mental content is the content had by mental states and processes. Causal theories of mental content attempt to explain what gives thoughts, beliefs, desires, and so forth their contents. They attempt to explain how thoughts can be about things. [ 1 ]

Although one might find precursors to causal theories of mental content scattered throughout the history of philosophy, the current interest in the topic was spurred, in part, by perceived inadequacies in “similarity” or “picture” theories of mental representation. Where meaning and representation are asymmetric relations—that is, a syntactic item “X” might mean or represent X, but X does not (typically) mean or represent “X”—similarity and resemblance are symmetric relations. Dennis Stampe (1977), who played an important role in initiating contemporary interest in causal theories, drew attention to related problems. Consider a photograph of one of two identical twins. What makes it a photo of Judy, rather than her identical twin Trudy? By assumption, it cannot be the similarity of the photo to one twin rather than the other, since the twins are identical. Moreover, one can have a photo of Judy even though the photo happens not to look very much like her at all. What apparently makes a photo of Judy a photo of Judy is that she was causally implicated, in the right way, in the production of the photo. Reinforcing the hunch that causation could be relevant to meaning and representation is the observation that there is a sense in which the number of rings in a tree stump represents the age of the tree when it died and that the presence of smoke means fire. The history of contemporary developments of causal theories of mental content consists largely of specifying what it is for something to be causally implicated in the right way in the production of meaning and refining the sense in which smoke represents fire to the sense in which a person’s thoughts, sometimes at least, represent the world.

If one wanted to trace a simple historical arc for recent causal theories, one would have to begin with the seminal 1977 paper by Dennis Stampe, “Toward a Causal Theory of Linguistic Representation.” Among the many important features of this paper is its having set much of the conceptual and theoretical stage to be described in greater detail below. It drew a contrast between causal theories and “picture theories” that try to explain representational content by appeal to some form of similarity between a representation and the thing represented. It also drew attention to the problem of distinguishing the content determining causes of a representation from adventitious non-content determining causes. So, for example, one will want “X” to mean dog because dogs cause “X”s, but one does not want “X” to mean blow-to-the-head, even though blows to the head might cause the occurrence of an “X”. (Much more of this will be described below.) Finally, it also provided some attempts to address this problem, such as an appeal to the function a thing might have.

Fred Dretske’s 1981 Knowledge and the Flow of Information offered a much expanded treatment of a type of causal theory. Rather than basing semantic content on a causal connection per se , Dretske began with a type of informational connection derived from the mathematical theory of information. This has led some to refer to Dretske’s theory as “information semantics”. Dretske also appealed to the notion of function in an attempt to distinguish content determining causes from adventitious non-content determining causes. This has led some to refer to Dretske’s theory as a “teleoinformational” theory or a “teleosemantic” theory. Dretske’s 1988 book, Explaining Behavior , further refined his earlier treatment.

Jerry Fodor’s 1984 “Semantics, Wisconsin Style” gave the problem of distinguishing content-determining causes from non-content determining causes its best-known guise as “the disjunction problem”. How can a causal theory of content say that “X” has the non-disjunctive content dog, rather than the disjunctive content dog-or-blow-to-the-head, when both dogs and blows to the head cause instances of “X”? By 1987, in Psychosemantics , Fodor published his first attempt at an alternative method of solving the disjunction problem, the Asymmetric (Causal) Dependency Theory. This theory was further refined for the title essay in Fodor’s 1990 book A Theory of Content and Other Essays .

Although these causal theories have subsequently spawned a significant critical literature, other related causal theories have also been advanced. Two of these are teleosemantic theories that are sometimes contrasted with causal theories. (Cf., e.g., Papineau (1984), Millikan (1989), and the entry on teleological theories of mental content .) Other more purely causal theories are Dan Lloyd’s (1987, 1989) Dialectical Theory of Representation, Robert Rupert’s (1999) Best Test Theory (see section 3.5 below), Marius Usher’s (2001) Statistical Referential Theory, and Dan Ryder’s (2004) SINBAD neurosemantics.

Causal theories of mental content are typically developed in the context of four principal assumptions. First, they typically presuppose that there is a difference between derived and underived meaning. [ 2 ] Normal humans can use one thing, such as “%”, to mean percent. They can use certain large red octagons to mean that one is to stop at an intersection. In such cases, there are collective arrangements that confer relatively specific meanings on relatively specific objects. In the case of human minds, however, it is proposed that thoughts can have the meanings or contents they do without recourse to collective arrangements. It is possible to think about percentage or ways of negotiating intersections prior to collective social arrangements. It, therefore, appears that the contents of our thoughts do not acquire the content they do in the way that “%” and certain large red octagons do. Causal theories of mental content presuppose that mental contents are underived, hence attempt to explain how underived meaning arises.

Second, causal theories of mental content distinguish what has come to be known as natural meaning and non-natural meaning . [ 3 ] Cases where an object or event X has natural meaning are those in which, given certain background conditions, the existence or occurrence of X “entails” the existence or occurrence of some state of affairs. If smoke in the unspoiled forest naturally means fire then, given the presence of smoke, there was fire. Under the relevant background conditions, the effect indicates or naturally means the cause. An important feature of natural meaning is that it does not generate falsity. If smoke naturally means fire, then there must really be a fire. By contrast, many non-naturally meaningful things can be false. Sentences, for example, can be meaningful and false. The utterance “Colleen currently has measles” means that Colleen currently has measles but does not entail that Colleen currently has measles in the way that Colleen’s spots do entail that she has measles. Like sentences, thoughts are also meaningful, but often false. Thus, it is generally supposed that mental content must be a form of non-natural underived meaning. [ 4 ]

Third, these theories assume that it is possible to explain the origin of underived content without appeal to other semantic or contentful notions. So, it is assumed that there is more to the project than simply saying that one’s thoughts mean that Colleen currently has the measles because one’s thoughts are about Colleen currently having the measles. Explicating meaning in terms of aboutness, or aboutness in terms of meaning, or either in terms of some still further semantic notion, does not go as far as is commonly desired by those who develop causal theories of mental content. To note some additional terminology, it is often said that causal theories of mental content attempt to naturalize non-natural, underived meaning. To put the matter less technically, one might say that causal theories of mental content presuppose that it is possible for a purely physical system to bear underived content. Thus, they presuppose that if one were to build a genuinely thinking robot or computer, one would have to design it in such a way that some of its internal components would bear non-natural, underived content in virtue of purely physical conditions. To get a feel for the difference between a naturalized theory and an unnaturalized theory of content, one might note the theory developed by Grice (1948). Grice developed an unnaturalized theory. Speaking of linguistic items, Grice held that ‘Speaker S non-naturally means something by “X”’ is roughly equivalent to ‘S intended the utterance of “X” to produce some effect in an audience by means of the recognition of this intention.’ Grice did not explicate the origin of mental content of speaker’s intentions or audience recognition, hence he did not attempt to naturalize the meaning of linguistic items.

Fourth, it is commonly presupposed that naturalistic analyses of non-natural, underived meanings will apply, in the first instance, to the contents of thought. The physical items “X” that are supposed to be bearers of causally determined content will, therefore, be something like the firings of a particular neuron or set of neurons. These contents of thoughts are said to be captured in what is sometimes called a “language of thought” or “mentalese.” The contents of items in natural languages, such as English, Japanese, and French, will then be given a separate analysis, presumably in terms of a naturalistic account of non-natural derived meanings. It is, of course, possible to suppose that it is natural language, or some other system of communication, that first develops content, which can then serve as a basis upon which to provide an account of mental content. Among the reasons that threaten this order of dependency is the fact that cognitive agents appear to have evolved before systems of communication. Another reason is that human infants at least appear to have some sophisticated cognitive capacities involving mental representation, before they speak or understand natural languages. Yet another reason is that, although some social animals may have systems of communication complex enough to support the genesis of mental content, other non-social cognizing animals may not.

It is worth noting that, in recent years, this last presupposition has sometimes been abandoned by philosophers attempting to understand animal signaling or animal communication, as when toads emit mating calls or vervet monkeys cry out when seeing a cheetah, eagle, or snake. See, for example, Stegmann, 2005, 2009, Skyrms, 2008, 2010a, b, 2012, and Birch, 2014. In other words, there have been efforts to use the sorts of apparatus originally developed for theories of mental content, plus or minus a bit, as apparatus for handling animal signaling. These approaches seem to allow that there are mental representations in the brains of the signaling/communicating animals, but do not rely on the content of the mental representations to provide the representational contents of the signals. In this way, the contents of the signals are not derived from the contents of the mental representations.

3. Specific Causal Theories of Mental Content

The unifying inspiration for causal theories of mental content is that some syntactic item “X” means X because “X”s are caused by Xs. [ 5 ] Matters cannot be this simple, however, since in general one expects that some causes of “X” are not among the content-specifying causes of “X”s. There are numerous examples illustrating this point, each illustrating a kind of cause that must not typically be among the content-determining causes of “X”:

  • (I) Suppose there is some syntactic item “X” that is a putative mental representation of a dog. Dogs will presumably cause tokens of “X”, but so might foxes at odd angles, with some obstructions, at a distance, or under poor lighting conditions. The causal theorist will need some principle that allows her to say that the causal links between dogs and “X”s will be content-determining, where the causal links between, say, foxes and “X”s will not. Mice and shrews, mules and donkeys, German Shepherds and wolves, dogs and papier-mâché dogs, dogs and stuffed dogs, and any number of confusable groups would do to make this point.
  • (II) A syntactic item “X” with the putative content of dog might also be caused by a dose of LSD, a set of strategically placed and activated microelectrodes, a brain tumor, or quantum mechanical fluctuations. Who knows what mental representations might be triggered by these things? LSD, microelectrodes, etc., should (typically) not be among the content-determining causes of most mental representations.
  • (III) Upon hearing the question “What kind of animal is named ‘Fido’?” a person might token the syntactic item “X”. One will want at least some cases in which this “X” means dog, but to get this result the causal theorist will not want the question to be among the content-determining causes of “X”.
  • (IV) In seeing a dog, there is a causal pathway between the dog through the visual system (and perhaps beyond) to a token of “X”. What in this causal pathway from the dog to “X” constitutes the content-determining element? In virtue of what is it the case that “X” means dog, rather than retinal projection of a dog, or any number of other possible points along the pathway? Clearly there is a similar problem for other sense modalities. In hearing a dog, there is a causal pathway between the dog through the auditory system (and perhaps beyond) to a token of “X”. What makes “X” mean dog, rather than sound of a dog (barking?) or eardrum vibration or motion in the stapes bone of the inner ear? One might press essentially the same point by asking what makes “X” mean dog, rather than some complex function of all the diverse causal intermediaries between dogs and “X”.

The foregoing problem cases are generally developed under the rubric of “false beliefs” or “the disjunction problem” in the following way and can be traced to Fodor (1984). No one is perfect, so a theory of content should be able to explicate what is going on when a person makes a mistake, such as mistaking a fox for a dog. The first thought is that this happens when a fox (at a distance or in poor lighting conditions) causes the occurrence of a token of “X” and, since “X” means dog, one has mistaken a fox for a dog. The problem with this first thought arises with the invocation of the idea that “X” means dog. Why say that “X” means dog, rather than dog or fox? On a causal account, we need some principled reason to say that the content of “X” is dog, hence that the token of “X” is falsely tokened by the fox, rather than the content of “X” is dog or fox, hence that the token of “X” is truly tokened by the fox. What basis is there for saying that “X” means dog, rather than dog or fox? Because there appears always to be this option of making the content of a term some disjunction of items, the problem has been called “the disjunction problem”. [ 6 ]
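As a toy illustration of the disjunction problem (entirely invented, not from the entry), consider a simple "detector" whose causal profile by itself underdetermines its content:

```python
# A toy "dog detector" whose symbol "X" is tokened by dogs, but also by
# foxes under poor viewing conditions. The causal facts alone do not say
# whether the fox case is a FALSE tokening of DOG or a TRUE tokening of
# the disjunctive content DOG-OR-FOX. Everything here is invented.

def tokens_X(stimulus, viewing_conditions="good"):
    """Return True when the detector fires its symbol "X"."""
    if stimulus == "dog":
        return True
    if stimulus == "fox" and viewing_conditions == "poor":
        return True   # a mistake, or a correct tokening of DOG-OR-FOX?
    return False

def causes_of_X(stimuli, viewing_conditions):
    """Everything that would cause a token of "X" under the conditions."""
    return {s for s in stimuli if tokens_X(s, viewing_conditions)}
```

Under poor conditions, causes_of_X({"dog", "fox", "cat"}, "poor") comes out as {"dog", "fox"}: nothing in the detector's causal profile itself settles whether its content is dog (with the fox tokening an error) or dog-or-fox (with the fox tokening correct).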

As was noted above, what unifies causal theories of mental content is some version of the idea that “X”s being causally connected to Xs makes “X”s mean Xs. What divides causal theories of mental content, most notably, is the different approaches they take to separating the content-determining causes from the non-content-determining causes. Some of these different theories appeal to normal conditions, others to functions generated by natural selection, others to functions acquired ontogenetically, and still others to dependencies among laws. At present there is no approach that is commonly agreed to correctly separate the content-determining causes from the non-content determining causes while at the same time respecting the need not to invoke existing semantic concepts. Although each attempt may have technical problems of its own, the recurring problem is that the attempts to separate content-determining from non-content-determining causes threaten to smuggle in semantic elements.

In this section, we will review the internal problematic of causal theories by examining how each theory fares on our battery of test cases (I)–(IV), along with other objections from time to time. This provides a simple, readily understood organization of the project of developing a causal theory of mental content, but it does this at a price. The primary literature is not arranged exactly in this way. The positive theories found in the primary literature are typically more nuanced than what we present here. Moreover, the criticisms are not arranged into the kind of test battery we have with cases (I)–(IV). One paper might bring forward cases (I) and (III) against theory A, where another paper might bring forward cases (I) and (II) against theory B. Nor are the examples in our test battery exactly the ones developed in the primary literature. In other words, the price one pays for this simplicity of organization is that we have something less like a literature review and more like a theoretical and conceptual toolbox for understanding causal theories.

Trees usually grow a certain way. Each year, there is the passage of the four seasons with a tree growing more quickly at some times and more slowly at others. As a result, each year a tree adds a “ring” to its girth in such a way that one might say that each ring means a year of growth. If we find a tree stump that has twelve rings, then that means that the tree was twelve years old when it died. But, it is not an entirely inviolable law that a tree grows a ring each year. Such a law, if it is one, is at most a ceteris paribus law. It holds only given certain background conditions, such as that weather conditions are normal. If the weather conditions are especially bad one season, then perhaps the tree will not grow enough to produce a new ring. One might, therefore, propose that if conditions are normal, then n rings means that the tree was n years old when it died. This idea makes its first appearance when Stampe (1977) invokes it as part of his theory of “fidelity conditions.”

An appeal to normal conditions would seem to be an obvious way in which to bracket at least some non-content-determining causes of a would-be mental representation “X”. It is only the causes that operate under normal conditions that are content-determining. So, when it comes to human brains, under normal conditions one is not under the influence of hallucinogens nor is one’s head being invaded by an elaborate configuration of microelectrodes. So, even though LSD and microelectrodes would, counterfactually speaking, cause a token neural event “X”, these causes would not be among the content-determining causes of “X”. Moreover, one can take normal conditions of viewing to include good lighting, a particular perspective, a particular viewing distance, a lack of (seriously) occluding objects, and so forth, so that foxes in dim light, viewed from the bottom up, at a remove of a mile, or through a dense fog, would not be among the content-determining causes of “X”. Under normal viewing conditions, one does not confuse a fox with a dog, so foxes are not to be counted as part of the content of “X”. Moreover, if one does confuse a fox with a dog under normal viewing conditions, then perhaps one does not really have a mental representation of a dog, but maybe only a mental representation of a member of the taxonomic family canidae.

Although an appeal to normal conditions initially appears promising, it does not seem to be sufficient to rule out the causal intermediaries between objects in the environment and “X”. Even under normal conditions of viewing that include good lighting, a particular perspective, a particular viewing distance, a lack of (seriously) occluding objects, and so forth, it is still the case that both dogs and, say, retinal projections of dogs, lead to tokens of “X”. Why does the content of “X” not include retinal projections of dogs or any of the other causal intermediaries? Nor do normal conditions suffice to keep questions from getting in among the content-determining causes. What abnormal conditions are there when the question, “What kind of animal is named ‘Fido’?,” leads to a tokening of an “X” with the putative meaning of dog? Suppose there are instances of quantum mechanical fluctuations in the nervous system, wherein spontaneous changes in neurons lead to tokens of “X”. Do normal conditions block these out? So, there are problem cases in which appeals to normal conditions do not seem to work. Fodor (1990b) discusses this problem with proximal stimulations in connection with his asymmetric dependency theory, but it is one that clearly challenges the causal theory plus normal conditions approach.
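The worry in this paragraph can be put in toy form (again entirely invented): even after normality screens off the abnormal causes, every proximal link in the normal causal chain still counts as a cause of "X", so normal conditions alone cannot single out the distal dog.

```python
# Toy sketch: restricting attention to normal conditions filters out
# abnormal causes (LSD, microelectrodes, foxes in fog), but every
# intermediate link in the normal causal chain still causes "X".
# All names and items here are invented for illustration.

NORMAL_CHAIN = [
    "dog",
    "light reflected from the dog",
    "retinal projection of the dog",
    "optic-nerve activity",
]
ABNORMAL_CAUSES = {"LSD", "microelectrodes", "fox in fog"}

def causes_under(conditions):
    """Everything that causes a token of "X" under the given conditions."""
    if conditions == "normal":
        return set(NORMAL_CHAIN)
    return set(NORMAL_CHAIN) | ABNORMAL_CAUSES
```

Normality removes the abnormal causes, but "retinal projection of the dog" remains in causes_under("normal"), which is precisely the residual problem the paragraph raises about causal intermediaries.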

Next, suppose that we tightly construe normal conditions to eliminate the kinds of problem cases described above. So, when completely fleshed out, under normal conditions only dogs cause “X”s. What one intuitively wants is to be able to say that, under normal conditions of good lighting, proper viewing distance, etc., “X” means dog. But, another possibility is that in such a situation “X” does not mean dog, but dog-under-normal-conditions-of-good-lighting, proper-viewing-distance, etc. Why take one interpretation over the other? One needs a principled basis for distinguishing the cause of “X” from the many causally contributing factors. In other words, we still have the problem of bracketing non-content-determining causes, only in a slightly reformulated manner. This sort of objection may be found in Fodor (1984).

Now set the preceding problem aside. There is still another developed in Fodor (1984). Suppose that “X” does mean dog under conditions of good lighting, lack of serious occlusions, etc. Do not merely suppose that “X” is caused by dogs under conditions of good light, lack of serious occlusions, etc.; grant that “X” really does mean dog under these conditions. Even then, why does “X”, the firing of the neuronal circuit, still mean dog, when those conditions do not hold? Why does “X” still mean dog under, say, degraded lighting conditions? After all, we could abide by another apparently true conditional regarding these other conditions, namely, if the lighting conditions were not so good, there were no serious occlusions, etc., then the neuronal circuit’s firing would mean dog or fox. Even if “X” means X under one set of conditions C1, why doesn’t “X” mean Y under a different set of conditions C2? It looks as though one could say that C1 provides normal conditions under which “X” means X and C2 provides normal conditions under which “X” means Y. We need some non-semantic notions to enable us to fix on one interpretation, rather than the other. At this point, one might look to a notion of functions to solve these problems.[7]

Many physical objects have functions. (Stampe (1977) was the first to note this as a fact that might help causal theories of content.) A familiar mercury thermometer has the function of indicating temperature. But, such a thermometer works against a set of background conditions which include the atmospheric pressure. The atmospheric pressure influences the volume of the vacuum that forms above the column of mercury in the glass tube. So, the height of the column of mercury is the product of two causally relevant features, the ambient atmospheric temperature and the ambient atmospheric pressure. This suggests that one and the same physical device with the same causal dependencies can be used in different ways. A column of mercury in a glass tube can be used to measure temperature, but it is possible to put it to use as a pressure gauge. Which thing a column of mercury measures is determined by its function.

This observation suggests a way to specify which causes of “X” determine its content. The content of “X”, say, the firing of some neurons, is determined by dogs, and not foxes, because it is the function of those neurons to register the presence of dogs, but not foxes. Further, the content of “X” does not include LSD, microelectrodes, or quantum mechanical fluctuations, because it is not the function of “X” to fire in response to LSD, microelectrodes, or quantum mechanical fluctuations in the brain. Similarly, the content of “X” does not include proximal sensory projections of dogs, because the function of the neurons is to register the presence of the dogs, not the sensory stimulations. It is the objective features of the world that matter to an organism, not its sensory states. Finally, “X” means dog, rather than questions such as ‘What kind of animal is named “Fido”?’, because it is the function of “X” to register the presence of dogs, not the presence of questions. Functions, thus, provide a prima facie attractive means of properly winnowing down the causes of “X” to those that are genuinely content determining.

In addition, the theory of evolution by natural selection apparently provides a non-semantic, non-intentional basis upon which to explicate functions and, in turn, semantic content. Individual organisms vary in their characteristics, such as how their neurons respond to features of the environment. Some of these differences in how neurons respond make a difference to an organism’s survival and reproduction. Finally, some of these very differences may be heritable. Natural selection, commonly understood as this differential reproduction of heritable variation, is purely causal. Suppose that there is a population of rabbits. Further suppose that either by a genetic mutation or by the recombination of existing genes, some of these rabbits develop neurons that are wired into their visual systems in such a way that they fire (more or less reliably) in the presence of dogs. Further, the firing of these neurons is wired into a freezing behavior in these rabbits. Because of this configuration, the rabbits with the “dog neurons” are less likely to be detected by dogs, hence more likely to survive and reproduce. Finally, because the genes for these neurons are heritable, the offspring of these dog-sensitive rabbits will themselves be dog-sensitive. Over time, the number of the dog-sensitive rabbits will increase, thereby displacing the dog-insensitive rabbits. So, natural selection will, in such a scenario, give rise to mental representations of dogs. Insofar as such a story is plausible, there is hope that natural selection and the genesis of functions can provide a naturalistically acceptable means of delimiting content-determining causes.
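
The selection scenario just described can be made concrete with a small calculation. The following toy sketch (every number in it is an illustrative assumption, not a figure from the text) tracks the population share of the heritable dog-sensitive trait when it confers a modest reproductive advantage:

```python
# Toy model of differential reproduction of heritable variation.
# Assumptions (hypothetical): dog-sensitive rabbits reproduce at a
# relative rate of 1.10 versus 1.00 for dog-insensitive rabbits, and
# the trait is perfectly heritable.

def next_generation_share(p, w_sensitive=1.10, w_insensitive=1.00):
    """One generation of selection: each type's share is weighted by
    its relative reproductive success, then renormalized."""
    total = p * w_sensitive + (1 - p) * w_insensitive
    return p * w_sensitive / total

share = 0.01  # the new "dog neuron" mutation starts out rare
for generation in range(200):
    share = next_generation_share(share)

print(f"dog-sensitive share after 200 generations: {share:.3f}")
```

Even a small heritable advantage drives the trait toward fixation given enough generations, which is all the rabbit scenario requires. Note that the model, like the selection story itself, is silent on whether the selected-for neurons register dogs or dog-look-alikes; that is precisely the objection taken up below.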

3.2.1 Objections to Evolutionary Functions

There is no doubt that individual variation, differential reproduction, and inheritance can be understood in a purely causal manner. Yet, there remains skepticism about how naturalistically one can describe what natural selection can select for. There are doubts about the extent to which the objects of selection really can be specified without illicit importation of intentional notions. Fodor (1989, 1990a) gives voice to some of this skepticism. Prima facie, it makes sense to say that the neurons in our hypothetical rabbits fire in response to the presence of dogs, hence that there is selection for dog representations. But, it makes just as much sense, one might worry, to say that it is sensitivity to dog-look-alikes that leads to the greater fitness of the rabbits with the new neurons.[8] There are genes for the dog-look-alike neurons and these genes are heritable. Moreover, those rabbits that freeze in response to dog-look-alikes are more likely to survive and reproduce than are those that do not so freeze, hence one might say that the freezing is in response to dog-look-alikes. So, our ability to say that the meaning of the rabbits’ mental representation “X” is dog, rather than dog-look-alike, depends on our ability to say that it is the dog-sensitivity of “X”, rather than the dog-look-alike-sensitivity of “X”, that keeps the rabbits alive longer. Of course, being dog-sensitive and being dog-look-alike-sensitive are connected, but the problem here is that both being dog-look-alike-sensitive and being dog-sensitive can increase fitness in ways that lead to the fixation of a genotype. And it can well be that it is avoidance of dogs that keeps a rabbit alive, but one still needs some principled basis for saying that the rabbits avoid dogs by being sensitive to dogs, rather than by being sensitive to dog-look-alikes. The latter appears to be good enough for the differential reproduction of heritable variation to do its work. Where we risk importing semantic notions into the mix is in understanding selection intentionally, rather than purely causally. We need a notion of “selection for” that is both general enough to work for all the mental contents causal theorists aspire to address and that does not tacitly import semantic notions.

In response to this sort of objection, it has been proposed that the correct explanation of a rabbit’s evolutionary success with, say, “X”, is not that this enables the rabbit to avoid dog-look-alikes, but that it enables it to avoid dogs. It is dogs, but not mere dog-look-alikes, that prey on rabbits. (This sort of response is developed in Millikan (1991) and Neander (1995).) Yet, the rejoinder is that if we really want to get at the correct explanation of a rabbit-cum-“X” system, then we should not suppose that “X” means dog. Instead, we should say that it is in virtue of the fact that “X” picks up on something like, say, predator of such and such characteristics that the “X” alarm system increases the chance of a rabbit’s survival. (This sort of rejoinder may be found in Agar (1993).)

This problem aside, there is also some concern about the extent to which it is plausible to suppose that natural selection could act on the fine details of the operation of the brain, such as the firing of neurons in the presence of dogs. (This is an objection raised in Fodor (1990c).) Natural selection might operate to increase the size of the brain so there is more cortical mass for cognitive processing. Natural selection might also operate to increase the folding of the brain so as to maximize the cortical surface area that can be contained within the brain. Natural selection might also lead to compartmentalization of the brain, so that one particular region could be dedicated to visual processing, another to auditory processing, and still another to face processing. Yet, many would take it to be implausible to suppose that natural selection works at the level of individual mental representations. The brain is too plastic and there is too much individual variation in the brains of mammals to admit of selection acting in this way. Moreover, such far-reaching effects of natural selection would lead to innate ideas not merely of colors and shapes, but of dogs, cats, cars, skyscrapers, and movie stars. Rather than supposing that functions are determined by natural selection across multiple generations, many philosophers contend that it is more plausible that the functions that underlie mental representations are acquired through cognitive development.

Hypothesizing that certain activities or events within the brain mean what they do, in part, because of some function that develops over the course of an individual’s lifetime shares many of the attractive features of the hypothesis that these same activities or events mean what they do, in part, because of some evolutionarily acquired function. One again can say that it is not the function of “X” to register the presence of LSD, microelectrodes, foxes, stuffed dogs, papier-mâché dogs, or questions, but that it is their function to report on dogs. Moreover, it does not invoke dubious suppositions about an intimate connection between natural selection and the precise details of neuronal hardware and its operation. A functional account based on ontogenetic function acquisition or learning seems to be an improvement. This is the core of the approach taken in Dretske (1981; 1988).

The function acquisition story proposes that during development, an organism is trained to discriminate real flesh-and-blood dogs from questions, foxes, stuffed dogs, and papier-mâché dogs, under conditions of good lighting and without occlusions or distractions. A teacher ensures that training proceeds according to plan. Once “X” has acquired the function to respond to dogs, the training is over. Thereafter, any instances in which “X” is triggered by foxes, stuffed dogs, papier-mâché dogs, LSD, microelectrodes, etc., are false tokenings and figure into false beliefs.

3.3.1 Objections to Developmental Functions

Among the most familiar objections to this proposal is that there is no principled distinction between when a creature is learning and when it is done learning. Instances in which a creature entertains the hypothesis that “X” means X, instances in which the creature entertains the hypothesis that “X” means Y, instances in which the creature straightforwardly uses “X” to mean X, and instances in which the creature straightforwardly uses “X” to mean Y are thoroughly intermingled. The problem is perhaps more clearly illustrated with tokens of natural language, where children struggle through correct and incorrect uses of a word before (perhaps) finally settling on a correct usage. There seems to be no principled way to specify whether learning has stopped or whether there is instead “lifelong learning”. This is among the objections to be found in Fodor (1984).

This, however, is a relatively technical objection. Further reflection suggests that there may be an underlying appeal to the intentions of the teacher. Let us revisit the learning story. Suppose that during the learning period the subject is trained to use “X” as a mental representation of dogs. Now, let the student graduate from “X”-using school and immediately thereafter see a fox. Seeing this fox causes a token of “X” and one would like to say that this is an instance of mistaking a fox for a dog, hence a false tokening. But, consider the situation counterfactually. If the student had seen the fox during the training period just before graduation, the fox would have triggered a token of “X”. This suggests that we might just as well say that the student learned that “X” means fox or dog as that the student learned that “X” means dog. Thus, we might just as well say that, after training, the graduate does not falsely think of a dog, but truly thinks of a fox or a dog. The threat of running afoul of naturalist scruples comes if one attempts to say, in one way or another, that it is because the teacher meant for the student to learn that “X” means dog, rather than that “X” means fox or dog. The threatened violation of naturalism comes in invoking the teacher’s intentions. This, too, is an objection to be found in Fodor (1984).

The preceding attempts to distinguish the content-determining causes from non-content-determining causes focused on the background or boundary conditions under which the distinct types of causes may be thought to act. Fodor’s Asymmetric Dependency Theory (ADT), however, represents a bold alternative to these approaches. Although Fodor (1987, 1990a, b, 1994) contain numerous variations on the details of the theory, the core idea is that the content-determining cause is in an important sense fundamental, where the non-content-determining causes are non-fundamental. The sense of being fundamental is that the non-content-determining causes depend on the content-determining cause; the non-content-determining causes would not exist if not for the content-determining cause. Put a bit more technically, there are numerous laws such as ‘Y1 causes “X”,’ ‘Y2 causes “X”,’ etc., but none of these laws would exist were it not a law that X causes “X”. The fact that the ‘X causes “X”’ law does not in the same way depend on any of the Y1, Y2, …, Yn laws makes the dependence asymmetric. Hence, there is an asymmetric dependency between the laws. The intuition here is that the question, ‘What kind of animal is called “Fido”?’ will cause an occurrence of the representation “X” only because of the fact that dogs cause “X”. Instances of foxes cause instances of “X” only because foxes are mistaken for dogs and dogs cause instances of “X”.

Causation is typically understood to have a temporal dimension. First there is event C and this event C subsequently leads to event E. Thus, when the ADT is sometimes referred to as the “Asymmetric Causal Dependency Theory,” the term “causal” might suggest a diachronic picture in which there is, first, an X-“X” law which subsequently gives rise to the various Y-“X” laws. Such a diachronic interpretation, however, would lead to counterexamples for the ADT approach. Fodor (1987) discusses this possibility. Consider Pavlovian conditioning. Food causes salivation in a dog. Then a bell causes salivation in the dog. It is likely that the bell causes salivation only because the food causes it. Yet, salivation hardly means food. It may well naturally mean that food is present, but salivation is not a thought or thought content and it is not ripe for false semantic tokening. Or take a more exotic kind of case. Suppose that one comes to apply “X” to dogs, but only by means of observations of foxes. This would be a weird case of “learning”, but if things were to go this way, one would not want “X” to mean fox. To block this kind of objection, the theory maintains that the dependency between the fundamental X-“X” law and the non-fundamental Y-“X” laws is synchronic. The dependency is such that if one were to break the X-“X” law at time t, then one would thereby instantaneously break all the Y-“X” laws at that time.

The core of ADT, therefore, comes down to this. “X” means X if

  • ‘Xs cause “X”s’ is a law,
  • For all Ys that are not Xs, if Ys qua Ys actually cause “X”s, then the Ys’ causing “X”s is asymmetrically dependent on the Xs’ causing “X”s,
  • The dependence in (2) is synchronic (not diachronic).

This seems to get a number of cases right. The reason that questions like “What kind of animal is named ‘Fido’?” or “What is a Sheltie?” trigger “X”, meaning dog, is that dogs are able to trigger “X”s. Foxes only trigger “X”s, meaning dog, because dogs are able to trigger them. Moreover, it appears to solve the disjunction problem. Suppose we have a ‘dogs cause “X”s’ law and a ‘dogs or foxes cause “X”s’ law. If one breaks the ‘dogs cause “X”s’ law, then one thereby breaks the ‘dogs or foxes cause “X”s’ law, since the only reason either dogs or foxes cause “X”s is because dogs do. Moreover, if one breaks the ‘dogs or foxes cause “X”s’ law, one does not thereby break the ‘dogs cause “X”s’ law, since dogs alone might suffice to cause “X”s. So, the ‘dogs or foxes cause “X”s’ law depends on the ‘dogs cause “X”s’ law, but not vice versa. Asymmetric dependency of laws gives the right results.[9]
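
The asymmetry at work in the disjunction case can be modeled in a few lines. In this toy sketch (the encoding is my own illustrative formalization, not Fodor’s; the function and law names are assumptions), breaking a law is simply stipulating it away, and the fox and dog-or-fox laws hold only via the dog law:

```python
# Toy model of ADT's asymmetric dependency among laws.
# A "law" here is just a name; `holds` says whether that law still holds
# once the laws in `broken` are stipulated away.

def holds(law, broken):
    if law in broken:
        return False
    if law == "dog":                  # the candidate fundamental law
        return True
    if law in ("fox", "dog_or_fox"):  # these hold only via the dog law
        return holds("dog", broken)
    return False

def asymmetrically_depends(dependent, fundamental):
    """True iff breaking `fundamental` breaks `dependent`, while
    breaking `dependent` leaves `fundamental` intact."""
    breaks_with = not holds(dependent, {fundamental})
    survives_reverse = holds(fundamental, {dependent})
    return breaks_with and survives_reverse

print(asymmetrically_depends("dog_or_fox", "dog"))
print(asymmetrically_depends("dog", "dog_or_fox"))
```

Here `asymmetrically_depends("dog_or_fox", "dog")` comes out true while the reverse comes out false, mirroring the verdict that the disjunctive law depends on the dog law but not vice versa.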

3.4.1 Objections to ADT

Adams and Aizawa (1994) mention an important class of causes that the ADT does not appear to handle, namely, the “non-psychological interventions”. We have all along assumed that “X” is some sort of brain event, such as the firing of some neurons. But, it is plausible that some interventions, such as a dose of hallucinogen or maybe some carefully placed microelectrodes, could trigger such brain events, quite apart from the connection of those brain events to other events in the external world. If essentially all brain events are so artificially inducible, then it would appear that for all putative mental representations, there will be some laws, such as ‘microelectrodes cause “X”s,’ that do not depend on laws such as ‘dogs cause “X”s.’ If this is the case, then the second condition of the ADT would rarely or never be satisfied, so that the theory would have little relevance to actual cognitive scientific practice.

Fodor (1990a) discusses challenges that arise from the fact that the perception of objects involves causal intermediaries. Suppose that there is a dog-“X” law that is mediated entirely by sensory mechanisms. In fact, suppose unrealistically that the dog-“X” law is mediated by a single visual sensory projection. In other words, let the dog-“X” law be mediated by the combination of a dog-dog_sp law and a dog_sp-“X” law. Under these conditions, it appears that “X” means dog_sp, rather than dog. Condition (1) is satisfied, since there is a dog_sp-“X” law. Condition (2) is satisfied, since if one were to break the dog_sp-“X” law one would thereby break the dog-“X” law (i.e., there is a dependence of one law on the other) and breaking the dog-“X” law would not necessarily break the dog_sp-“X” law (i.e., the dependence is not symmetric). The dependence is asymmetric, because one can break the dog-“X” law by breaking the dog-dog_sp law (by changing the way dogs look) without thereby breaking the dog_sp-“X” law. Finally, condition (3) is satisfied, since the dependence of the dog-“X” law on the dog_sp-“X” law is synchronic.

The foregoing version of the sensory projections problem relies on what was noted to be the unrealistic assumption that the dog-“X” law is mediated by a single visual sensory projection. Relaxing the assumption does not so much solve the problem as transform it. So, adopt the more realistic assumption that the dog-“X” law is sustained by a combination of a large set of dog-sensory projection laws and a large set of dog_sp-“X” laws. In the first set, we have laws connecting dogs to particular patterns of retinal stimulation, laws connecting dogs to particular patterns of acoustic stimulation, etc. In the second set, we have certain psychological laws connecting particular patterns of retinal stimulation to “X”, certain psychological laws connecting particular patterns of acoustic stimulation to “X”, etc. In this sort of situation, there threatens to be no “fundamental” law, no law on which all other laws asymmetrically depend. If one breaks the dog-“X” law one does not thereby break any of the sensory projection-“X” laws, since the former can be broken by dissolving all of the dog-sensory projection laws. If, however, one breaks any one of the particular dog_sp-“X” laws, e.g., one connecting a particular doggish visual appearance to “X”, one does not thereby break the dog-“X” law. The other sensory projections might sustain the dog-“X” law. Moreover, breaking the law connecting a particular doggish look to “X” will not thereby break a law connecting a particular doggish sound to “X”. Without a “fundamental” law, there is no meaning in virtue of the conditions of the ADT. Further, the applicability of the ADT appears to be dramatically reduced insofar as connections between mental representations and properties in the world are mediated by sensory projections. (See Neander (2013), Schulte (2018), and Artiga & Sebastián (2020) for discussion of the distality problem for other causal theories.)

Another problem arises with items or kinds that are indistinguishable. Adams and Aizawa (1994), and, implicitly, McLaughlin (1991), among others, have discussed this problem. As one example, consider the time at which the two minerals, jadeite and nephrite, were chemically indistinguishable and were both thought to be jade. As another, one might appeal to H2O and XYZ (the stuff of philosophical thought experiments, the water look-alike substance found on twin-earth). Let X = jadeite and Y = nephrite and let there be laws ‘jadeite causes “X”’ and ‘nephrite causes “X”’. Can “X” mean jadeite? No. Condition (1) is satisfied, since it is a law that ‘jadeite causes “X”’. Condition (3) is satisfied, since breaking the jadeite-“X” law will immediately break the nephrite-“X” law. If jadeite cannot trigger an “X”, then neither can nephrite, since the two are indistinguishable. That is, there is a synchronic dependence of the ‘nephrite causes “X”’ law on the ‘jadeite causes “X”’ law. The problem arises with condition (2). Breaking the jadeite-“X” law will thereby break the nephrite-“X” law, but breaking the nephrite-“X” law will also thereby break the jadeite-“X” law. Condition (2) cannot be satisfied, since there is a symmetric dependence between the jadeite-“X” law and the nephrite-“X” law. By parity of reasoning, “X” cannot mean nephrite. So, can “X” mean jade? No. As before, conditions (1) and (3) could be satisfied, since there could be a jade-“X” law and the jadeite-“X” law and the nephrite-“X” law could synchronically depend on it. The problem is, again, with condition (2). Presumably breaking the jade-“X” law would break the jadeite-“X” and nephrite-“X” laws, but breaking either of them would break the jade-“X” law. The problem is, again, with symmetric dependencies.

Here is a problem that we earlier found in conjunction with other causal theories. Despite the bold new idea underlying the ADT method of partitioning off non-content-determining causes, it too appears to sneak in naturalistically unacceptable assumptions. Like all causal theories of mental content, the asymmetric causal dependencies are supposed to be the basis upon which meaning is created; the dependencies are not themselves supposed to be a product, or byproduct, of meaning. Yet, ADT appears to violate this naturalistic pre-condition for causal theories. (This kind of objection may be found in Seager (1993), Adams & Aizawa (1994a, 1994b), Wallis (1995), and Gibson (1996).) Ys are supposed to cause “X”s only because Xs do and this must not be because of any semantic facts about “X”s. So, what sort of mechanism would bring about such asymmetric dependencies among things connected to the syntactic item “X”? In fact, why wouldn’t lots of things be able to cause “X”s besides Xs, quite independently of the fact that Xs do? The instantiation of “X”s in the brain is, say, some set of neurochemical events. There should be natural causes capable of producing such events in one’s brain under a variety of circumstances. Why on earth would foxes be able to cause the neurochemical “X” events in us only because dogs can? One might be tempted to observe that “X” means dog, “Y” means fox, we associate foxes with dogs, and that is why foxes cause “X”s only because dogs cause “X”s. We would not associate foxes with “X”s unless we associated “X”s with dogs and foxes with dogs. This answer, however, involves deriving the asymmetric causal dependencies from meanings, which violates the background assumption of the naturalization project. Unless there is a better explanation of such asymmetrical dependencies, it may well be that the theory is misguided in attempting to rest meaning upon them.

A relatively more recent causal theory is Robert Rupert’s (1999) Best Test Theory (BTT) for the meanings of natural kind terms. Unlike most causal theories, this one is restricted in scope to just natural kinds and terms for natural kinds. To mark this restriction, we will let represented kinds be denoted by K’s, rather than our usual X’s.

Best Test Theory : If a subject S bears no extension-fixing intentions toward “X” and “X” is an atomic natural kind term in S’s language of thought (i.e., not a compound of two or more other natural kind terms), then “X” has as its extension the members of natural kind K if and only if members of K are more efficient in their causing of “X” in S than are the members of any other natural kind.

To put the idea succinctly, “X” means, or refers to, those things that are the most powerful stimulants of “X”. That said, we need an account of what it is for a member of a natural kind to be more efficient in causing “X”s than are other natural kinds. We need an account of how to measure the power of a stimulus. This might be explained in terms of a kind of biography.

Figure 1. A spreadsheet biography

Consider an organism S that (a) causally interacts with three different natural kinds, K1–K3, in its environment and (b) has a language of thought with five terms “X1”–“X5”. Further, suppose that each time S interacts with an individual of kind Ki this causes an occurrence of one or more of “X1”–“X5”. We can then create a kind of “spreadsheet biography” or “log of mental activity” for S in which there is a column for each of “X1”–“X5” and a row for each instance in which a member of K1–K3 causes one or more instances of “X1”–“X5”. Each mental representation “Xi” that Ki triggers receives a “1” in its column. Thus, a single spreadsheet biography might look like that shown in Figure 1.

To determine what a given term “Xi” means, we find the kind that is most effective at causing “Xi”. This can be computed from S’s biography. For each Ki and “Xi”, we compute the frequency with which Ki triggers “Xi”. “X1” is tokened four out of six times that K1 is encountered, three out of three times that K2 is encountered, and one out of four times that K3 is encountered. “Xi” means the Ki that has the highest sample frequency. Thus, in this case, “X1” means K2. Just to be clear, when BTT claims that “Xi” means the Ki that is the most powerful stimulant of “Xi”, this is not to say that “Xi” means the most common stimulant of “Xi”. In our spreadsheet biography, K1 is the most common stimulant of “X1”, since it triggers “X1” four times, where K2 triggers it only three times, and K3 triggers it only one time. This is why, according to BTT, “X1” means K2, rather than K1 or K3.
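
The computation just described can be sketched directly. The counts below are the ones the text reads off Figure 1 for “X1”; the helper name is my own:

```python
# Best Test Theory sketch: a term means the kind with the highest
# triggering *frequency* (triggerings / encounters), not the kind
# that triggers it most often in absolute terms.

def most_efficient_kind(biography):
    """biography: kind -> (times it triggered the term, times encountered)."""
    return max(biography, key=lambda kind: biography[kind][0] / biography[kind][1])

# The "X1" column of the spreadsheet biography, per the text:
x1_biography = {
    "K1": (4, 6),  # the most common stimulant, but only 4/6 reliable
    "K2": (3, 3),  # perfectly reliable, though encountered less often
    "K3": (1, 4),
}

print(most_efficient_kind(x1_biography))  # K2, the most *powerful* stimulant
```

On these counts K2 wins with frequency 3/3 = 1.0, beating K1’s 4/6 even though K1 triggers “X1” more often in absolute terms, which is exactly the distinction the paragraph draws.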

How does the BTT handle our range of test cases? Consider, first, the standard form of the disjunction problem, the case of “X” meaning dog, rather than dog or fox-on-a-dark-night-at-a-distance. Since the latter is not apparently a natural kind, “X” cannot mean that.[10] Moreover, “X” means dog, rather than fox, because the only times the many foxes that S encounters can trigger “X”s is on dark nights at a distance, whereas dogs trigger “X”s more consistently under a wider range of conditions.

How does the BTT address the apparent problem of “brain interventions,” such as LSD, microelectrodes, or brain tumors? The answer is multi-faceted. The quickest method for taking much of the sting out of these cases is to note that they generally do not arise for most individuals. The Best Test Theory relies on personal biographies in which only actual instances of kinds triggering mental representations are used to specify causal efficiency. The counterfactual truth that, were a stimulating microelectrode to be applied to, say, a particular neuron, it would perfectly reliably produce an “X” token simply does not matter for the theory. So, for all those individuals who do not take LSD, do not have microelectrodes inserted in their brains, do not have brain tumors, etc., these sorts of counterfactual possibilities are irrelevant. A second line of defense against “brain interventions” appeals to the limitation to natural kinds. The BTT might set aside microelectrodes, since they do not constitute a natural kind. Maybe brain tumors are a natural kind; maybe not. Unfortunately, however, LSD is a very strong candidate for a chemical natural kind. Still, the BTT is not without a third line of defense for handling these cases. One might suppose that LSD and brain tumors act on the brain in a rather diffuse manner. Sometimes a dose of LSD triggers “Xi”, another time it triggers “Xj”, and another time it triggers “Xk”. One might then propose that, if one counts all these episodes with LSD, none of these will act often enough on, say, “Xi” to get it to mean LSD, rather than, say, dog. This is the sort of strategy that Rupert invokes to keep mental symbols from meaning omnipresent, but non-specific, causes such as the heart. The heart might causally contribute to “X1”, but it also contributes to so many other “Xi”s that the heart will turn out not to be the most efficient cause of “X1”.

What about questions? Presumably questions as a category will count as an instance of a linguistic natural kind. Moreover, particular sentences will also count. So, the restriction of the BTT to natural kinds is of little use here. So, what of causal efficiency? Many sentences appear to provoke a wide range of possible responses. In response to, “I went to the zoo last week,” S could think of lions, tigers, bears, giraffes, monkeys, and any number of other natural kinds. But, the question, “What animal goes ‘oink, oink’?”—perhaps uttered in “Motherese” in a clear, deliberate fashion so that it is readily comprehensible to a child—will be rather efficient in generating thoughts of a pig. Moreover, it could be more efficient than actual pigs, since a child might have more experience with the question than with actual pigs, often not figuring out that actual pigs are pigs. In such situations, “pig” would turn out to mean “What animal goes ‘oink, oink’?,” rather than pig. So, there appear to be cases in which BTT could make prima facie incorrect content assignments.

What, finally, of proximal projections of natural kinds? One plausible line might be to maintain that proximal projections of natural kinds are not themselves natural kinds, hence that they are automatically excluded from the scope of the theory. This plausible line, however, might be the only available line. Presumably, in the course of S’s life, the only way dogs can cause “X”s is by way of causal mediators between the dogs and the “X”s. Thus, each episode in which a dog causes an “X” is also an episode in which a sensory projection of a dog causes an “X”. So, dog efficiency for “X” can be no higher than the efficiency of dog sensory projections. And, if it is possible for there to be a sensory projection of a dog without there being an actual dog, then the efficiency of the projections would be greater than the efficiency of the dogs. So, “X” could not mean dog. But, this problem is not necessarily damaging to the BTT.

Since the BTT has not received a critical response in the literature, we will not devote a section to objections to it. Instead, we will leave well enough alone with our somewhat speculative treatment of how BTT might handle our familiar test cases. The general upshot is that the combination of actual causal efficiency over the course of an individual’s lifetime along with the restriction to natural kinds provides a surprisingly rich means of addressing some long-standing problems.

4. General Objections to Causal Theories of Mental Content

In the preceding section, we surveyed issues that face the philosopher attempting to work out the details of a causal theory of mental content. These issues are, therefore, one might say, internal to causal theories. In this section, however, we shall review some of the objections that have been brought forward against the very idea of a causal theory of mental content. As such, these objections might be construed as external to the project of developing a causal theory of mental content. Some of these are coeval with causal theories and have been addressed in the literature, while others are relatively recent and have yet to receive much discussion. The first objections, discussed in subsections 4.1–4.4, in one way or another push against the idea that all content could be explained by appeal to a causal theory, but leave open the possibility that one or another causal theory might provide sufficiency conditions for meaning. The last objections, those discussed in subsections 4.5–4.6, challenge the ability of causal theories to provide even sufficiency conditions for mental content.

One might think that the meanings of terms that denote mathematical or logical relations could not be handled by a causal theory. How could a mental version of the symbol “+” be causally connected to the addition function? How could a mental version of the logical symbol “¬” be causally connected to the negation truth function? The addition function and the negation function are abstract objects. To avoid this problem, causal theories typically acquiesce and maintain that their conditions are merely sufficient conditions on meaning. If an object meets the conditions, then that object bears meaning. But, the conditions are not necessary for meaning, leaving room for representations of abstract objects to get their meaning in some other way. Perhaps conceptual role semantics, wherein the meanings of terms are defined in terms of the meanings of other terms, could be made to work for these cases.

Another class of potential problem cases are vacuous terms. So, for example, people can think about unicorns, fountains of youth, or the planet Vulcan. Cases such as these are discussed in Stampe (1977) and Fodor (1990a), among other places. These things would be physical objects were they to exist, but they do not, so one cannot causally interact with them. In principle, one could say that thoughts about such things are not counterexamples to causal theories, since causal theories are meant only to offer sufficiency conditions for meaning. But, this in-principle reply appears to be ad hoc. It is not warranted, for example, by the fact that these excluded meanings involve abstract objects. There are, however, a number of options that might be explored here.

One strategy would be to turn to the basic ontology of one’s causal theory of mental content. This is where a theory based on nomological relations might be superior to a version that is based on causal relations between individuals. One might say that there can be a unicorn-“unicorn” law, even if there are no actual unicorns. This story, however, would break down for mental representations of individuals, such as the putative planet Vulcan. There is no law that connects a mental representation to an individual; laws are relations among properties.

Another strategy would be to propose that some thought symbols are complex and can decompose into meaningful primitive constituents. One could then allow that “X” is a kind of abbreviation for, or logical construction of, or defined in terms of “Y1,” “Y2,” and “Y3,” and that a causal theory applies to “Y1,” “Y2,” and “Y3.” So, for example, one might have a thought of a unicorn, but rather than having a single unicorn mental representation there is another representation made up of a representation of a horse, a representation of a horn, and a representation of the relationship between the horse and the horn. “Horse”, “horn”, and “possession” may then have instantiated properties as their contents.

Horgan and Tienson (2002) object to what they describe as “strong externalist theories” that maintain that causal connections are necessary for content. They argue, first, that mental life involves a lot of intentional content that is constituted by phenomenology alone. Perceptual states, such as seeing a red apple, are intentional. They are about apples. Believing that there are more than 10 Mersenne primes and hoping to discover a new Mersenne prime are also intentional states, in this case about Mersenne primes. But, all these intentional states have a phenomenology—something it is like to be in these states. There is something it is like to see a red apple, something different that it is like to believe there are more than 10 Mersenne primes, and something different still that it is like to hope to discover a new Mersenne prime. Horgan and Tienson propose that there can be phenomenological duplicates—two individuals with exactly the same phenomenology. Assume nothing about these duplicates other than that they are phenomenological duplicates. In such a situation, one can be neutral regarding how much of their phenomenological experience is veridical and how much illusory. So, one can be neutral on whether or not a duplicate sees a red apple or whether there really are more than 10 Mersenne primes. This suggests that there is a kind of intentionality—that shared by the duplicates—that is purely phenomenological. Second, Horgan and Tienson argue that phenomenology constitutively depends only on narrow factors. They observe that one’s experiences are often caused or triggered by events in the environment, but that these environmental causes are only parts of causal chains that lead to the phenomenology itself. They do not constitute that phenomenology. The states that constitute, or provide the supervenience base for, the phenomenology are not the elements of the causal chain leading back into the environment. 
If we combine the conclusions of these two arguments, we get Horgan and Tienson’s principal argument against any causal theory that would maintain that causal connections are necessary for content.

P1. There is intentional content that is constituted by phenomenology alone.
P2. Phenomenology is constituted only by narrow factors.
C. There is intentional content that is constituted only by narrow factors.

Thus, versions of causal theories that suppose that all content must be based on causal connections are fundamentally mistaken. For those versions of causal theories that offer only sufficiency conditions on semantic content, however, Horgan and Tienson’s argument may be taken to provide a specific limitation on the scope of causal theories, namely, that causal theories do not work for intentional content that is constituted by phenomenology alone.

A relatively familiar challenge to this argument may be found in certain representational theories of phenomenological properties. (See, for example, Dretske (1988) and Tye (1997).) According to these views, the phenomenology of a mental state derives from that state’s representational properties, but the representational properties are determined by external factors, such as the environment in which an organism finds itself. Thus, such representationalist theories challenge premise P2 of Horgan and Tienson’s argument.

Buras (2009) presents another argument that is perhaps best thought of as providing a novel reason to think that causal theories of mental representation only offer sufficiency conditions on meaning. This argument begins with the premise that some mental states are about themselves. To motivate this claim, Buras notes that some sentences are about themselves. So, by analogy with, “This sentence is false,” which is about itself, one might think that there is a thought, “This thought is false,” that is also about itself. Or, how about “This thought is realized in brain tissue” or “This thought was caused by LSD”? These appear to be about themselves. Buras’ second premise is that nothing is a cause of itself. So, “This thought is false” is about itself, but could not be caused by itself. So, the sentence “This thought is false” could not mean that it itself is false in virtue of the fact that “This thought is false” was caused by its being false. So, “This thought is false” must get its meaning in some other way. It must get its meaning in virtue of some other conditions of meaning acquisition.

This is not, however, exactly the way Buras develops his argument. In the first place, he treats causal theories of mental content as maintaining that, if “X” means X, then X causes “X”. (Cf. Buras, 2009, p. 118). He cites Stampe (1977), Dretske (1988), and Fodor (1987) as maintaining this. Yet, Stampe, Dretske, and Fodor explicitly formulate their theories in terms of sufficiency conditions, so that (roughly) “X” means X, if Xs cause “X”s, etc. (See, for example, Stampe (1977), pp. 82–3, Dretske (1988), p. 52, and Fodor (1987), p. 100). In the second place, Buras seems to draw a conclusion that is orthogonal to the truth or falsity of causal theories of mental content. He begins his paper with an impressively succinct statement of his argument.

Some mental states are about themselves. Nothing is a cause of itself. So some mental states are not about their causes; they are about things distinct from their causes (Buras, 2009, p. 117).

The causal theorist can admit that some mental states are not about their causes, since some states are thoughts and thoughts mean what they do in virtue of, say, the meanings of mental sentences. These mental sentences might mean what they do in virtue of the meanings of primitive mental representations (which may or may not mean what they do in virtue of a causal theory of meaning) and the way in which those primitive mental representations are put together. As was mentioned in section 2, such a syntactically and semantically combinatorial language of thought is a familiar background assumption for causal theories. The conclusion that Buras may want, instead, is that there are some thoughts that do not mean what they do in virtue of what causes them. So, through some slight amendments, one can understand Buras to be presenting a clarification of the scope of causal theories of mental content or as a challenge to a particularly strong version of causal theories, a version that takes them as offering a necessary condition on meaning.

4.5 Causal Theories Do Not Work for Reliable Misrepresentations

As noted above, one of the central challenges for causal theories of mental content has been to discriminate between a “core” content-determining causal connection, as between cows and “cow”s, and “peripheral” non-content-determining causal connections, as between horses and “cow”s. Cases of reliable misrepresentation are representations which always misrepresent in the same way. In such cases, there is supposed to be no “core” content-determining causal connection; there are no Xs to which “X”s are causally connected. Instead, there are only “peripheral” causal connections. Mendelovici (2013), following a discussion by Holman (2002), suggests that color representations may be like this.[11] Color anti-realism, according to which there are no colors in the world, seems to be committed to the view that color representations are not caused by colors in the world. Color representations may be reliably tokened by something in the world, but not by colors that are in the world.

In some instances, reliable misrepresentations provide another take on some of the familiar content-determination problems. So, take attempts to use normal conditions to distinguish between content-determining causes and non-content-determining causes. Even in normal conditions, color representations are not caused by colors, but by, say, surface reflectances under certain conditions of illumination, just in the way that, even in normal conditions, cow representations are sometimes not caused by cows, but by, say, a question such as, “What kind of animal is sometimes named ‘Bessie’?” Or take a version of the asymmetric dependency theory. On this theory applied to color terms, it might seem that there is no red-to-“red” law on which all the other laws depend, in much the same way that it might seem there is no unicorn-to-“unicorn” law on which all other laws depend. (Cf. Fodor (1987, pp. 163–4) and (1990, pp. 100–1)).

Unlike in the more familiar cases, Mendelovici (2013) does not argue that there actually are such problematic cases. The argument is not that there are actual cases of reliable misrepresentation, but merely that reliable misrepresentations are possible and that this is enough to create trouble for causal theories of mental representation. One sort of trouble stems from the need for a pattern of psychological explanation. Let a mental representation “X” mean intrinsically-heavy. Such a representation is a misrepresentation, since there is no such property of being intrinsically heavy. Such a misrepresentation is, nonetheless, reliable (i.e., consistent), since it is consistently tokened by the same sorts of things on earth. But, one can see how an agent using “X” could make a reasonable, yet mistaken, inference to the conclusion that an object that causes a tokening of “X” on earth would be hard to lift on the moon. To allow such a pattern of explanation, Mendelovici argues, a causal theorist must allow for reliable misrepresentation. A theory of what mental representations are should not preclude such patterns of explanation. Another sort of trouble stems from the idea that if a theory of meaning does not allow for reliable misrepresentation, but requires that there be a connection between “X”s and Xs, then this would constitute a commitment to a realist metaphysics for Xs. While there can be good reasons for realism, the needs of a theory of content would not seem to be a proper source for them.

Artiga (2013) provides a defense of teleosemantic theories in the face of Mendelovici’s examples of reliable misrepresentation. Some of Artiga’s arguments might also be used by advocates of causal theories of mental content. Mendelovici (2016) replies to Artiga (2013) by providing refinements and a further defense of the view that reliable misrepresentations are a problem for causal theories of mental content.

Cummins (1997) argues that causal theories of mental content are incompatible with the fact that one’s perception of objects in the physical environment is typically mediated by a theory. His argument proceeds in two stages. In one stage, he argues that, on a causal theory, for each primitive “X” there must be some bit of machinery or mechanism that is responsible for detecting Xs. But, since a finite device, such as the human brain, contains only a finite amount of material, it can only generate a finite number of primitive representations. Next, he observes that thought is productive—that it can, in principle, generate an unbounded number of semantically distinct representations. This means that to generate the stock of mental representations corresponding to each of these distinct thoughts, one must have a syntactically and semantically combinatorial system of mental representation of the sort found in a language of thought (LOT). More explicitly, this scheme of mental representation must have the following properties:

  • It has a finite number of semantically primitive expressions.
  • Every expression is a concatenation of one or more primitive expressions.
  • The content of any complex expression is a function of the contents of the primitives and the way those primitives are concatenated into the whole expression.

The conclusion of this first stage is, therefore, that a causal theory of mental representation requires a LOT. In the other stage of his argument, Cummins observes that, for a wide range of objects, their perception is mediated by a body of theory. Thus, to perceive dogs—for dogs to cause “dog”s—one has to know things such as that dogs have tails, that dogs have fur, and that dogs have four legs. But, to know that dogs have tails, fur, and four legs, one needs a set of mental representations, such as “tail”, “fur”, “four”, and “legs”. Now the problem fully emerges. According to causal theories, having a “dog” representation requires the ability to detect dogs. But, the ability to detect dogs requires a theory of dogs. And having a theory of dogs requires already having a LOT—a system of mental representation. One cannot generate mental representations without already having them.[12]
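The three properties Cummins requires of a LOT-style scheme can be sketched in a few lines of code. This is our illustration, not anything from Cummins (1997); the primitive vocabulary and the toy “content” values are invented for the example:

```python
# Toy combinatorial scheme with Cummins' three properties (illustrative only).

# 1. Finitely many semantically primitive expressions.
PRIMITIVE_CONTENTS = {
    "dog": "the property of being a dog",
    "has": "the possession relation",
    "tail": "the property of being a tail",
}

def content(expression):
    # 2. Every expression is a concatenation of one or more primitives
    #    (here, whitespace-separated).
    parts = expression.split()
    if not parts or any(p not in PRIMITIVE_CONTENTS for p in parts):
        raise ValueError("not a concatenation of known primitives")
    # 3. The content of a complex expression is a function of the contents
    #    of its primitives and of how they are concatenated (here, order).
    return tuple(PRIMITIVE_CONTENTS[p] for p in parts)

print(content("dog has tail"))
```

Finitely many detectors would fix the primitives' contents; concatenation then yields an unbounded stock of semantically distinct complex expressions, which is what the productivity of thought requires.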

Jason Bridges (2006) argues that the core hypothesis of informational semantics conflicts with the idea that psychological laws are non-basic. As we have just observed, causal theories are often taken to offer mere sufficiency conditions for meaning. Suppose, therefore, that we suitably restrict the scope of a causal theory and understand its core hypothesis as asserting that all “X”s with the content X are reliably caused by Xs. (Nothing in the logic of Bridges’ argument depends on any additional conditions on a putative causal theory of mental content, so for simplicity we can follow Bridges in restricting attention to this simple version.) Bridges proposes that this core claim of a causal theory of mental content is a constitution thesis. It specifies what constitutes the meaning relation (at least in some restricted domain). Thus, if one were to ask, “Why is it that all ‘X’s with content X are reliably caused by Xs?,” the answer is roughly, “That’s just what it is for ‘X’ to have the content X”. Being caused in that way is what constitutes having that meaning. So, when a theory invokes this kind of constitutive relation, there is this kind of constitutive explanation. So, the first premise of Bridges’ argument is that causal theories specify a constitutive relation between meaning and reliable causal connection.

Bridges next observes that causal theorists typically maintain that the putative fact that all “X”s are reliably caused by Xs is mediated by underlying mechanisms of one sort or another. So, “X”s might be reliably caused by dogs in part through the mediation of a person’s visual system or auditory system. One’s visual apparatus might causally connect particular patterns of color and luminance produced by dogs to “X”s. One might put the point somewhat differently by saying that a causal theorist’s hypothetical “Xs causes ‘X’s” law is not a basic or fundamental law of nature, but an implemented law.

Bridges’ third premise is a principle that he takes to be nearly self-evident, once understood. We can develop a better first-pass understanding of Bridges’ argument if, at the risk of distorting the argument, we consider a slightly simplified version of this principle:

(S) If it is a true constitutive claim that all fs are gs, then it is not an implemented law that all fs are gs.

To illustrate the principle, suppose we say that gold is identical to the element with atomic number 79, so that all gold has atomic number 79. Then suppose one were to ask, “Why is it that all gold has the atomic number 79?” The answer would be, “Gold just is the element with atomic number 79.” This would be a constitutive explanation. According to (S), however, this constitutive explanation precludes giving a further mechanistic explanation of why gold has atomic number 79. There is no mechanism by which gold gets atomic number 79. Having atomic number 79 just is what makes gold gold.

So, here is the argument:

P1. It is a true constitutive claim that all “X”s with content X are reliably caused by Xs.
P2. If it is a true constitutive claim that all “X”s with content X are reliably caused by Xs, then it is not an implemented law that all “X”s with content X are reliably caused by Xs.

Therefore, by modus ponens on P1 and P2,

C1. It is not an implemented law that all “X”s with content X are reliably caused by Xs.

But, C1 contradicts the common assumption

P3. It is an implemented law that all “X”s with content X are reliably caused by Xs.[13]
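The premises P1–P3 are jointly inconsistent, which is the point of the objection. A schematic rendering of the derivation (in Lean 4, with the constitutive claim and the implemented-law claim treated as unanalyzed propositions; the formalization is ours):

```lean
-- Schematic form of Bridges' argument. `Constitutive` abbreviates "it is a
-- true constitutive claim that all “X”s with content X are reliably caused
-- by Xs"; `Implemented` abbreviates the corresponding implemented-law claim.
example (Constitutive Implemented : Prop)
    (P1 : Constitutive)
    (P2 : Constitutive → ¬Implemented)
    (P3 : Implemented) : False :=
  (P2 P1) P3  -- C1 := P2 P1 : ¬Implemented, by modus ponens; contradicts P3
```

The causal theorist must therefore reject one of the premises; Rupert's response, discussed below, targets the first.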

Rupert (2008) challenges the first premise of Bridges’ argument on two scores. First, he notes that claims about constitutive natures have modal implications which at least some naturalistic philosophers have found objectionable. Second, he claims that natural scientists do not appeal to constitutive natures, so that one need not develop a theory of mental content that invokes them.

In discussing informational variants of causal theories, Artiga and Sebastián (2020) have proposed a new “metasemantic” problem. They correctly note that there is a difference between explaining why “X” means X, rather than Y, and explaining why “X” has a meaning at all. Moreover, they correctly note that having an answer to the first question does not necessarily answer the second. But, it is unclear that the metasemantic problem is serious. If one’s theory is that “X” means X if A, B, and C, then that would seem to provide an answer to the metasemantic question. Why does “X” mean anything at all? Because “X” meets sufficiency conditions A, B, and C. In other words, insofar as there is a question to be answered, it appears that all theories provide one.

Although philosophers and cognitive scientists frequently propose to dispense with (one or another sort of) mental representation (cf., e.g., Stich, 1983, Brooks, 1991, van Gelder, 1995, Haugeland, 1999, Johnson, 2007, Chemero, 2009), this is universally accepted to be a revolutionary shift in thinking about minds. Short of taking on board such radical views, one will naturally want some explanation of how mental representations arise. In attempting such explanations, causal theories have been widely perceived to have numerous attractive features. If, for example, one use for mental representations is to help one keep track of events in the world, then some causal connection between mind and world makes sense. This attractiveness has been enough to motivate new causal theories (e.g. Rupert, 1999, Usher, 2001, and Ryder, 2004), despite the widespread recognition of serious challenges to an earlier generation of theories developed by Stampe, Dretske, Fodor, and others.

  • Adams, F., 1979, “A Goal-State Theory of Function Attribution,” Canadian Journal of Philosophy , 9: 493–518.
  • –––, 2003a, “Thoughts and their Contents: Naturalized Semantics,” in S. Stich and T. Warfield (eds.), The Blackwell Guide to Philosophy of Mind , Oxford: Basil Blackwell, pp. 143–171.
  • –––, 2003b, “The Informational Turn in Philosophy,” Minds and Machines , 13: 471–501.
  • Adams, F. and Aizawa, K., 1992, “‘X’ Means X: Semantics Fodor-Style,” Minds and Machines , 2: 175–183.
  • –––, 1994a, “Fodorian Semantics,” in S. Stich and T. Warfield (eds.), Mental Representations , Oxford: Basil Blackwell, pp. 223–242.
  • –––, 1994b, “‘X’ Means X: Fodor/Warfield Semantics,” Minds and Machines , 4: 215–231.
  • Adams, F., Drebushenko, D., Fuller, G., and Stecker, R., 1990, “Narrow Content: Fodor’s Folly,” Mind & Language , 5: 213–229.
  • Adams, F. and Dietrich, L., 2004, “What’s in a(n Empty) Name?,” Pacific Philosophical Quarterly , 85: 125–148.
  • Adams, F. and Enc, B., 1988, “Not Quite by Accident,” Dialogue , 27: 287–297.
  • Adams, F. and Stecker, R., 1994, “Vacuous Singular Terms,” Mind & Language , 9: 387–401.
  • Agar, N., 1993, “What do frogs really believe?,” Australasian Journal of Philosophy , 71: 1–12.
  • Aizawa, K., 1994, “Lloyd’s Dialectical Theory of Representation,” Mind & Language , 9: 1–24.
  • Antony, L. and Levine, J., 1991, “The Nomic and the Robust,” in B. Loewer and G. Rey (eds.), Meaning in Mind: Fodor and His Critics , Oxford: Basil Blackwell, pp. 1–16.
  • Artiga, M., & Sebastián, M. A., 2020, “Informational Theories of Content and Mental Representation,” Review of Philosophy and Psychology , 11: 613–27.
  • Baker, L., 1989, “On a Causal Theory of Content,” Philosophical Perspectives , 3: 165–186.
  • –––, 1991, “Has Content Been Naturalized?,” in B. Loewer and G. Rey (eds.), Meaning in Mind: Fodor and His Critics , Oxford: Basil Blackwell, pp. 17–32.
  • Bar-On, D., 1995, “‘Meaning’ Reconstructed: Grice and the Naturalizing of Semantics,” Pacific Philosophical Quarterly , 76: 83–116.
  • Boghossian, P., 1991, “Naturalizing Content,” in B. Loewer and G. Rey (eds.), Meaning in Mind: Fodor and His Critics , Oxford: Basil Blackwell, pp. 65–86.
  • Bridges, J., 2006, “Does Informational Semantics Commit Euthyphro’s Fallacy?,” Noûs , 40: 522–547.
  • Brooks, R., 1991, “Intelligence without Representation,” Artificial Intelligence , 47: 139–159.
  • Buras, T., 2009, “An Argument against Causal Theories of Mental Content,” American Philosophical Quarterly , 46: 117–129.
  • Cain, M. J., 1999, “Fodor’s Attempt to Naturalize Mental Content,” The Philosophical Quarterly , 49: 520–526.
  • Chemero, A., 2009, Radical Embodied Cognitive Science , Cambridge, MA: The MIT Press.
  • Cummins, R., 1989, Meaning and Mental Representation , Cambridge, MA: MIT/Bradford.
  • –––, 1997, “The LOT of the Causal Theory of Mental Content,” Journal of Philosophy , 94: 535–542.
  • Dennett, D., 1987, “Review of J. Fodor’s Psychosemantics ,” Journal of Philosophy , 85: 384–389.
  • Dretske, F., 1981, Knowledge and the Flow of Information , Cambridge, MA: MIT/Bradford Press.
  • –––, 1983, “Precis of Knowledge and the Flow of Information ,” Behavioral and Brain Sciences , 6: 55–63.
  • –––, 1986, “Misrepresentation,” in R. Bogdan (ed.), Belief , Oxford: Oxford University Press, pp. 17–36.
  • –––, 1988, Explaining Behavior: Reasons in a World of Causes , Cambridge, MA: MIT/Bradford.
  • –––, 1999, Naturalizing the Mind , Cambridge, MA: MIT Press.
  • Enç, B., 1982, “Intentional States of Mechanical Devices,” Mind , 91: 161–182.
  • Enç, B. and Adams, F., 1998, “Functions and Goal-Directedness,” in C. Allen, M. Bekoff and G. Lauder (eds.), Nature’s Purposes , Cambridge, MA: MIT/Bradford, pp. 371–394.
  • Fodor, J., 1984, “Semantics, Wisconsin Style,” Synthese , 59: 231–250. (Reprinted in Fodor, 1990a).
  • –––, 1987, Psychosemantics: The Problem of Meaning in the Philosophy of Mind , Cambridge, MA: MIT/Bradford.
  • –––, 1990a, A Theory of Content and Other Essays , Cambridge, MA: MIT/Bradford Press.
  • –––, 1990b, “Information and Representation,” in P. Hanson (ed.), Information, Language, and Cognition , Vancouver: University of British Columbia Press, pp. 175–190.
  • –––, 1990c, “Psychosemantics or Where do Truth Conditions come from?,” in W. Lycan (ed.), Mind and Cognition , Oxford: Basil Blackwell, pp. 312–337.
  • –––, 1991, “Replies,” in B. Loewer and G. Rey (eds.), Meaning in Mind: Fodor and His Critics , Oxford: Basil Blackwell, pp. 255–319.
  • –––, 1994, The Elm and the Expert , Cambridge, MA: MIT/Bradford.
  • –––, 1998a, Concepts: Where Cognitive Science Went Wrong , Oxford: Oxford University Press.
  • –––, 1998b, In Critical Condition: Polemical Essays on Cognitive Science and the Philosophy of Mind , Cambridge, MA: MIT/Bradford Press.
  • Gibson, M., 1996, “Asymmetric Dependencies, Ideal Conditions, and Meaning,” Philosophical Psychology , 9: 235–259.
  • Godfrey-Smith, P., 1989, “Misinformation,” Canadian Journal of Philosophy , 19: 533–550.
  • –––, 1992, “Indication and Adaptation,” Synthese , 92: 283–312.
  • Grice, H., 1957, “Meaning,” The Philosophical Review , 66: 377–88.
  • –––, 1989, Studies in the Way of Words , Cambridge: Harvard University Press.
  • Haugeland, J., 1999, “Mind Embodied and Embedded,” in J. Haugeland (ed.), Having Thought , Cambridge, MA: Harvard University Press, pp. 207–237.
  • Horgan, T., and Tienson, J., 2002, “The Intentionality of Phenomenology and the Phenomenology of Intentionality,” in D. Chalmers (ed.), Philosophy of Mind: Classical and Contemporary Readings , Oxford: Oxford University Press, pp. 520–533.
  • Johnson, M., 2007, The Meaning of the Body: Aesthetics of Human Understanding , Chicago, IL: University of Chicago Press.
  • Jones, T., Mulaire, E., and Stich, S., 1991, “Staving off Catastrophe: A Critical Notice of Jerry Fodor’s Psychosemantics,” Mind & Language , 6: 58–82.
  • Lloyd, D., 1987, “Mental Representation from the Bottom up,” Synthese , 70: 23–78.
  • –––, 1989, Simple minds , Cambridge, MA: The MIT Press.
  • Loar, B., 1991, “Can We Explain Intentionality?,” in B. Loewer and G. Rey (eds.), Meaning in Mind: Fodor and His Critics, Oxford: Basil Blackwell, pp. 119–135.
  • Loewer, B., 1987, “From Information to Intentionality,” Synthese , 70: 287–317.
  • Maloney, C., 1990, “Mental Representation,” Philosophy of Science , 57: 445–458.
  • Maloney, J., 1994, “Content: Covariation, Control and Contingency,” Synthese , 100: 241–290.
  • Manfredi, P. and Summerfield, D., 1992, “Robustness without Asymmetry: A Flaw in Fodor’s Theory of Content,” Philosophical Studies , 66: 261–283.
  • McLaughlin, B. P., 1991, “Belief individuation and Dretske on naturalizing content,” in B. P. McLaughlin (ed.), Dretske and His Critics , Oxford: Basil Blackwell, pp. 157–79.
  • –––, 2016, “The Skewed View From Here: Normal Geometrical Misperception,” Philosophical Topics , 44: 231–99.
  • Mendelovici, A., 2013, “Reliable misrepresentation and tracking theories of mental representation,” Philosophical Studies , 165: 421–443.
  • –––, 2016, “Why tracking theories should allow for clean cases of reliable misrepresentation,” Disputatio , 8: 57–92.
  • Millikan, R., 1984, Language, Thought and Other Biological Categories , Cambridge, MA: MIT Press.
  • –––, 1989, “Biosemantics,” Journal of Philosophy , 86: 281–97.
  • –––, 2001, “What Has Natural Information to Do with Intentional Representation?,” in D. M. Walsh (ed.), Naturalism, Evolution and Mind , Cambridge: Cambridge University Press, pp. 105–125.
  • Neander, K., 1995, “Misrepresenting and Malfunctioning,” Philosophical Studies , 79: 109–141.
  • –––, 1996, “Dretske’s Innate Modesty,” Australasian Journal of Philosophy , 74: 258–274.
  • Papineau, D., 1984, “Representation and Explanation,” Philosophy of Science , 51: 550–72.
  • –––, 1998, “Teleosemantics and Indeterminacy,” Australasian Journal of Philosophy , 76: 1–14.
  • Pineda, D., 1998, “Information and Content,” Philosophical Issues , 9: 381–387.
  • Possin, K., 1988, “Sticky Problems with Stampe on Representations,” Australasian Journal of Philosophy , 66: 75–82.
  • Price, C., 1998, “Determinate functions,” Noûs , 32: 54–75.
  • Rupert, R., 1999, “The Best Test Theory of Extension: First Principle(s),” Mind & Language , 14: 321–355.
  • –––, 2001, “Coining Terms in the Language of Thought: Innateness, Emergence, and the Lot of Cummins’s Argument against the Causal Theory of Mental Content,” Journal of Philosophy , 98: 499–530.
  • –––, 2008, “Causal Theories of Mental Content,” Philosophy Compass , 3: 353–80.
  • Ryder, D., 2004, “SINBAD Neurosemantics: A Theory of Mental Representation,” Mind & Language , 19: 211–240.
  • Schulte, P., 2012, “How Frogs See the World: Putting Millikan’s Teleosemantics to the Test,” Philosophia , 40: 483–96.
  • –––, 2015, “Perceptual Representations: A Teleosemantic Answer to the Breadth-of-Application Problem,” Biology & Philosophy , 30: 119–36.
  • –––, 2018, “Perceiving the world outside: How to solve the distality problem for informational teleosemantics,” The Philosophical Quarterly , 68: 349–69.
  • Skyrms, B., 2008, “Signals,” Philosophy of Science , 75: 489–500.
  • –––, 2010a, Signals: Evolution, Learning, and Information , Oxford: Oxford University Press
  • –––, 2010b, “The flow of information in signaling games,” Philosophical Studies , 147: 155–65.
  • –––, 2012, “Learning to signal with probe and adjust,” Episteme , 9: 139–50.
  • Stampe, D., 1975, “Show and Tell,” in B. Freed, A. Marras, and P. Maynard (eds.), Forms of Representation , Amsterdam: North-Holland, pp. 221–245.
  • –––, 1977, “Toward a Causal Theory of Linguistic Representation,” in P. French, H. K. Wettstein, and T. E. Uehling (eds.), Midwest Studies in Philosophy , vol. 2, Minneapolis: University of Minnesota Press, pp. 42–63.
  • –––, 1986, “Verification and a Causal Account of Meaning,” Synthese , 69: 107–137.
  • –––, 1990, “Content, Context, and Explanation,” in E. Villanueva, Information, Semantics, and Epistemology , Oxford: Blackwell, pp. 134–152.
  • Stegmann, U. E., 2005, “John Maynard Smith’s notion of animal signals,” Biology and Philosophy , 20: 1011–25.
  • –––, 2009, “A consumer-based teleosemantics for animal signals,” Philosophy of Science , 76: 864–75.
  • Sterelny, K., 1990, The Representational Theory of Mind , Oxford: Blackwell.
  • Stich, S., 1983, From Folk Psychology to Cognitive Science , Cambridge, MA: The MIT Press.
  • Sturdee, D., 1997, “The Semantic Shuffle: Shifting Emphasis in Dretske’s Account of Representational Content,” Erkenntnis, 47 : 89–103.
  • Tye, M., 1995, Ten Problems of Consciousness: A Representational Theory of Mind , Cambridge, MA: MIT Press.
  • Usher, M., 2001, “A Statistical Referential Theory of Content: Using Information Theory to Account for Misrepresentation,” Mind and Language , 16: 311–334.
  • –––, 2004, “Comment on Ryder’s SINBAD Neurosemantics: Is Teleofunction Isomorphism the Way to Understand Representations?,” Mind and Language , 19: 241–248.
  • Van Gelder, T. 1995, “What Might Cognition Be, If not Computation?,” The Journal of Philosophy , 91: 345–381.
  • Wallis, C., 1994, “Representation and the Imperfect Ideal,” Philosophy of Science , 61: 407–428.
  • –––, 1995, “Asymmetrical Dependence, Representation, and Cognitive Science,” The Southern Journal of Philosophy , 33: 373–401.
  • Warfield, T., 1994, “Fodorian Semantics: A Reply to Adams and Aizawa,” Minds and Machines , 4: 205–214.
  • Wright, L., 1973, “Functions,” Philosophical Review , 82: 139–168.




The Oxford Handbook of Causal Reasoning


8 Causal Mechanisms

Department of Psychology, Yale University, New Haven, Connecticut, USA

Published: 10 May 2017

This chapter reviews empirical and theoretical results concerning knowledge of causal mechanisms—beliefs about how and why events are causally linked. First, it reviews the effects of mechanism knowledge, showing that mechanism knowledge can override other cues to causality (including covariation evidence and temporal cues) and structural constraints (the Markov condition), and that mechanisms play a key role in various forms of inductive inference. Second, it examines several theories of how mechanisms are mentally represented—as associations, forces or powers, icons, abstract placeholders, networks, or schemas—and the empirical evidence bearing on each theory. Finally, it describes ways that people acquire mechanism knowledge, discussing the contributions from statistical induction, testimony, reasoning, and perception. For each of these topics, it highlights key open questions for future research.

Introduction

Our causal knowledge not only includes beliefs about which events are caused by other events, but also an understanding of how and why those events are related. For instance, when a soprano hits an extremely high note, the sound can break a wine glass due to the high frequency of the sound waves. Although people may not know the detailed mechanisms underlying this relationship (Rozenblit & Keil, 2002), people believe that some mechanism transmits a force from the cause to the effect (White, 1989). Likewise, people believe in causal mechanisms underlying interpersonal relations (see Hilton, Chapter 32 in this volume). When Romeo calls to the balcony, Juliet comes, and she does so because of her love. When Claudius murders the king, Hamlet seeks revenge, because Hamlet is filled with rage. We use mechanisms to reason about topics as grand as science (Koslowski, 1996) and morality (Cushman, 2008; see Lagnado & Gerstenberg, Chapter 29 in this volume), and domains as diverse as collision events (Gerstenberg & Tenenbaum, Chapter 27 in this volume; White, Chapter 14 in this volume) and psychopathology (Ahn, Kim, & Lebowitz, Chapter 30 in this volume). Causal mechanisms pervade our cognition through and through.

Indeed, when a person tries to determine the cause of an event, understanding the underlying causal mechanism appears to be the primary concern. For instance, when attempting to identify the cause of “John had an accident on Route 7 yesterday,” participants in Ahn, Kalish, Medin, and Gelman (1995) usually asked questions aimed at testing possible mechanisms (e.g., “Was John drunk?” or “Was there a mechanical problem with the car?”) rather than which factor was responsible for the effect (e.g., “Was there something special about John?” or “Did other people also have a traffic accident last night?”).

In this chapter, we describe the state of current research on mechanism knowledge. After defining terms, we review the effects of mechanism knowledge. We summarize studies showing (1) that mechanism knowledge can override other important cues to causality, and (2) that mechanism knowledge is critical for inductive inference. Next, we examine how mechanisms might be mentally represented, and summarize the empirical evidence bearing on each of several approaches. We then turn to how mechanisms are learned, parsing the contributions from statistical induction, testimony, reasoning, and perception. For each of these broad topics, we discuss potential avenues of future research.

What Is a Causal Mechanism?

A causal mechanism is generally defined as a (1) system of physical parts or abstract variables that (2) causally interact in systematically predictable ways, so that their operation can be generalized to new situations (e.g., Glennan, 1996; Machamer, Darden, & Craver, 2000). We use the term mechanism knowledge to refer to a mental representation of such a system.
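This working definition can be pictured as a toy data structure: a set of variables joined by directed causal links whose joint operation generalizes to new inputs. The sketch below is our own illustration, not anything proposed in the chapter, and the intermediate variables in the soprano example are invented for the purpose:

```python
# Toy model (illustrative only): a mechanism as variables plus directed
# causal links, so that its operation can be generalized predictably.
mechanism = {
    "soprano sings a high note": ["high-frequency sound waves"],
    "high-frequency sound waves": ["glass resonates"],
    "glass resonates": ["glass breaks"],
}

def downstream_effects(graph: dict, start: str) -> set:
    """Every event the start event can bring about via the mechanism."""
    seen, stack = set(), [start]
    while stack:
        for nxt in graph.get(stack.pop(), []):
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return seen

print(sorted(downstream_effects(mechanism, "soprano sings a high note")))
# → ['glass breaks', 'glass resonates', 'high-frequency sound waves']
```

Because the links are stored independently of any one episode, the same structure supports prediction in new situations, which is the generalizability clause (2) of the definition.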

Mechanism knowledge is critical in cognition because we use it to understand other causal relations (Ahn & Kalish, 2000). Thus, we are motivated to seek out the mechanisms that underlie a causal relationship. The mechanism underlying the relation “X caused Y” (e.g., a soprano’s singing caused a wine glass to break) will involve constructs other than X and Y (e.g., the high frequency of the voice) that can connect those events together. For this reason, mechanisms have a close relationship to explanations (Lombrozo, 2010; Lombrozo & Vasilyeva, Chapter 22 in this volume). For instance, the causal relation “Mary was talking on her cell phone and crashed into a truck” can be explained through its underlying mechanism, “Mary was distracted and didn’t see the red light.” However, because causal knowledge is organized hierarchically (Johnson & Keil, 2014; Simon, 1996), this entire causal system could be embedded into a larger system, such that more specific events might act as mechanisms underlying more general events. That is, “Mary was talking on her cell phone and crashed into a truck” might be a mechanism underlying “Mary’s driving caused a traffic accident,” which in turn might be a mechanism underlying “Mary caused delays on I-95,” and so on. Thus, mechanism knowledge is not merely a belief about what caused some event, but a belief about how or why that event was brought about by its cause, which can itself be explained in terms of another underlying mechanism, ad infinitum. Although we adopt this understanding of mechanism as a working definition, other factors, such as the organization of memory, appear to play a role in how mechanism knowledge is used and in what counts as a mechanism (Johnson & Ahn, 2015). We discuss some of these factors later in this chapter (see the section “Representing Causal Mechanisms”).
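The hierarchical embedding described above can be made concrete with a minimal recursive structure. This is a sketch under our own assumptions (the chapter proposes no such formalism); the `CausalRelation` type and `mechanism_depth` helper are hypothetical names chosen for illustration:

```python
# Sketch: each causal relation may carry an underlying mechanism, which
# is itself a causal relation with its own underlying mechanism, etc.
from dataclasses import dataclass
from typing import Optional

@dataclass
class CausalRelation:
    cause: str
    effect: str
    mechanism: Optional["CausalRelation"] = None  # the "how/why" link

# The chapter's Mary example, nested from general to specific:
delays = CausalRelation(
    cause="Mary's driving", effect="delays on I-95",
    mechanism=CausalRelation(
        cause="Mary was talking on her cell phone",
        effect="she crashed into a truck",
        mechanism=CausalRelation(
            cause="Mary was distracted",
            effect="she didn't see the red light",
        ),
    ),
)

def mechanism_depth(rel: CausalRelation) -> int:
    """How many levels of 'how/why' explanation are represented."""
    return 0 if rel.mechanism is None else 1 + mechanism_depth(rel.mechanism)

print(mechanism_depth(delays))  # → 2
```

Nothing in the structure caps the recursion, which mirrors the chapter's point that a mechanism can itself be explained by a further mechanism, ad infinitum; in practice the regress bottoms out wherever a reasoner's knowledge runs out.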

The term “mechanism” has also been used in several other ways in the literature, which are somewhat different from our use. First, the term “mechanistic explanation” is used to refer to backward-looking explanations (e.g., the knife is sharp because Mark filed it), as opposed to forward-looking, teleological explanations (the knife is sharp because it is for cutting; Lombrozo, 2010). However, this distinction does not map onto our sense of mechanism, because teleological explanations can often be recast in mechanistic terms, in terms of causally interacting variables (e.g., the knife is sharp because human agents wanted to fashion a sharp object, and forging a sharp piece of metal was the best way to accomplish this goal; Lombrozo & Carey, 2006).

Second, some have argued that our knowledge of mechanisms underlying two causally related events, say A and B , includes not only the belief that there is a system of causally related variables mediating the relationship between A and B (a “mechanism,” as defined in the current chapter), but also an assumption that a force or causal power is transmitted from A to B ( Ahn & Kalish, 2000 ; White, 1989 ). This is an independent issue because knowledge about a system of causally interconnected parts does not have to involve the notion of causal power or force. In fact, many of the studies reviewed in this chapter demonstrating effects of mechanism knowledge did not test whether the assumptions of causal force are required to obtain such effects. In this chapter, we separate these two issues when defining mechanism knowledge. Thus, our discussion of the effects of mechanism knowledge does not take a position on the debate concerning causal force, and our discussions of how people represent and learn mechanisms do not beg the question against statistical theories.

Using Mechanism Knowledge

A major purpose of high-level cognition is inductive inference —predicting the unknown from the known. Here, we argue that mechanism knowledge plays a critical role in people’s inductive capacities. We describe studies on how mechanism knowledge is used in a variety of inductive tasks, including causal inference, category formation, category-based induction, and probability judgment.

Mechanisms and Causal Inference

David Hume (1748/1977) identified two cues as critical to identifying causal relationships—covariation (the cause and effect occurring on the same occasions more often than would be expected by chance) and temporal contiguity (the cause and effect occurring close together in time). Both of these factors have received considerable empirical attention in recent years, and it has become increasingly clear that neither of these cues acts alone, but rather in conjunction with prior knowledge of causal mechanisms. In this section, we first describe how mechanism knowledge influences the interpretation of covariation information. We then describe how mechanism knowledge can result in violations of the causal Markov condition, a key assumption of modern Bayesian approaches to causal inference. Finally, we review evidence that even the seemingly straightforward cue of temporal contiguity is influenced in a top-down manner by mechanism knowledge.

Covariation

Scientists must test their hypotheses using statistical inference. To know whether a medical treatment really works, or a genetic mutation really has a certain effect, or a psychological principle really applies, one must test whether the cause and effect are statistically associated. This observation leads to the plausible conjecture that laypeople’s everyday causal reasoning also depends on an ability to test for covariation between cause and effect.

But consider the following (real) research finding from medical science ( Focht, Spicer, & Fairchok, 2002 ): placing duct tape over a wart made it disappear in 85% of the cases (compared to 60% of cases receiving more traditional cryotherapy). Despite the study’s experimental manipulation and statistically significant effect, people may still be doubtful that duct tape can remove warts because they cannot think of a plausible mechanism underlying the causal relationship. In fact, the researchers supplied a mechanism: the duct tape irritates the skin, which in turn stimulates an immune system response, which in turn wipes out the viral infection that had caused the wart in the first place. Given this mechanism information, people would be far likelier to believe this causal link. Thus, even statistically compelling covariation obtained through experimental manipulation may not be taken as evidence for a causal link in the absence of a plausible underlying mechanism.
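The covariation at issue here can be made concrete with a ΔP-style index (the difference in effect rates). Only the two rates reported above (85% for duct tape, 60% for cryotherapy) are used; note that the comparison is against an alternative treatment rather than the absence of treatment, and the function name below is our own illustrative choice.

```python
# Minimal sketch of a covariation contrast for the wart study.
# Rates are those reported in the text; everything else is illustrative.

def delta_p(p_effect_given_cause: float, p_effect_given_alternative: float) -> float:
    """A simple covariation index: difference in effect probability."""
    return p_effect_given_cause - p_effect_given_alternative

contrast = delta_p(0.85, 0.60)
print(f"duct tape vs. cryotherapy contrast: {contrast:.2f}")  # 0.25
```

A positive contrast of this size would ordinarily be taken as decent statistical evidence, which is exactly why people's reluctance to accept the causal link absent a mechanism is striking.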

However, in this example, it could be that the mechanism is supplying “covert” covariation information—for example, the mechanism implies covariation between duct tape and irritation, irritation and immune response, and immune response and wart recovery, and could have thereby conveyed stronger covariation between duct tape and wart recovery. In that case, one might argue that there is nothing special about mechanism information other than conveying covariation. To empirically demonstrate that mechanism information bolsters causal inferences above and beyond the covariation implied by the mechanism, Ahn et al. (1995 , Experiment 4) asked a group of participants to rate the strength of the covariation implied by sentences like “John does not know how to drive” for “John had a traffic accident.” They then asked a new group of participants to make causal attributions for the effect (e.g., the accident), given either the mechanism (e.g., John does not know how to drive) or its equivalent covariation (e.g., John is much more likely to have a traffic accident than other people are), as rated by the first group of participants. Participants were much more inclined to attribute the accident to the target cause when given the underlying mechanism, showing that mechanism information has an effect that goes beyond covariation.

More generally, the interpretation of covariation data is strongly influenced by mechanism knowledge. For example, learning about a covariation between a cause and effect has a stronger effect on the judged probability of a causal relationship when there is a plausible mechanism underlying the cause and effect (e.g., severed brake lines and a car accident) than when there is not (e.g., a flat tire and a car failing to start; Fugelsang & Thompson, 2000 ). Similarly, both scientists and laypeople are more likely to discount data inconsistent with an existing causal theory, relative to data consistent with the theory ( Fugelsang, Stein, Green, & Dunbar, 2004 ). Finally, people are more likely to condition on a potential alternative cause when interpreting trial-by-trial contingency data, if they are told about the mechanism by which the alternative cause operates ( Spellman, Price, & Logan, 2001 ). These effects show that not only does mechanism information do something beyond covariation, but that it even constrains the way that covariation is used.

Structural Constraints

Patterns of covariation between variables can be combined into larger patterns of causal dependency, represented as Bayesian networks ( Pearl, 2000 ; Rottman & Hastie, 2014 ; Rottman, Chapter 6 in this volume). For example, if a covariation is known to exist between smoking cigarettes ( A ) and impairment of lung function ( B ), and another is known to exist between smoking cigarettes ( A ) and financial burden ( C ), this can be represented as a causal network with an arrow from A to B and an arrow from A to C (a common cause structure). But of course, all of these events also have causes and consequences—social pressure causes cigarette smoking, impairment of lung function causes less frequent exercise, financial burden causes marital stress, and so on, ad infinitum. If we had to take into account all of these variables to make predictions about any of them (say, B ), then we would never be able to use causal knowledge to do anything. The world is replete with too much information for cognition without constraints.

The key computational constraint posited by Bayesian network theories of causation is the causal Markov condition (also known as “screening off”; Pearl, 2000 ; Spirtes, Glymour, & Scheines, 1993 ). This assumption allows the reasoner to ignore the vast majority of potential variables—to assume that the probability distribution of a given variable is independent of all other variables except its direct effects, conditional on its causes. For example, the Markov condition tells us, given the causal structure described previously for smoking, that if we know that Lisa smokes ( A ), knowing about her lung function ( B ) doesn’t tell us anything about her potential financial burden ( C ), and vice versa. Because the Markov condition is what allows reasoners to ignore irrelevant variables (here, we can predict B without knowing about C or any of the causes of A ), it is crucial for inference on Bayesian networks.
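The screening-off claim can be checked against a toy joint distribution. All numeric parameters below are invented for illustration; the point is only that in a common-cause network built this way, conditioning additionally on C leaves P(B | A) unchanged.

```python
from itertools import product

# Toy common-cause network A -> B, A -> C (the smoking example from the
# text). All parameters are made up for illustration.
p_a = 0.3                      # P(smoking)
p_b = {True: 0.8, False: 0.1}  # P(lung impairment | smoking status)
p_c = {True: 0.7, False: 0.2}  # P(financial burden | smoking status)

def joint(a, b, c):
    pa = p_a if a else 1 - p_a
    pb = p_b[a] if b else 1 - p_b[a]
    pc = p_c[a] if c else 1 - p_c[a]
    return pa * pb * pc

def prob(query, given):
    """P(query | given), with events given as dicts over A, B, C."""
    num = den = 0.0
    for a, b, c in product([True, False], repeat=3):
        world = {"A": a, "B": b, "C": c}
        if all(world[k] == v for k, v in given.items()):
            p = joint(a, b, c)
            den += p
            if all(world[k] == v for k, v in query.items()):
                num += p
    return num / den

# Screening off: once A is known, C is uninformative about B.
print(round(prob({"B": True}, {"A": True}), 6))             # 0.8
print(round(prob({"B": True}, {"A": True, "C": True}), 6))  # 0.8
```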

Alas, people often violate the Markov condition. Although a number of factors appear to be at play in these violations, including essentialist ( Rehder & Burnett, 2005 ) and associationist ( Rehder, 2014 ) thinking, one critical factor is mechanism knowledge (Park & Sloman, 2013 , 2014 ). In common cause structures such as the preceding smoking example (smoking leading to lung impairment and financial burden), where each causal link relies on a different mechanism, people do tend to obey the Markov condition. That is, when asked to judge the probability of lung impairment given that a person smokes, this judgment is the same as when asked to judge the probability of lung impairment given that a person smokes and has a financial burden. But when the links rely on the same mechanism (e.g., smoking leading to lung impairment and to blood vessel damage), people robustly violate the Markov condition. When asked to judge the probability of lung impairment given that a person smokes, this judgment is lower than when asked to judge the probability of lung impairment given that a person smokes and has blood vessel damage.

This effect is thought to occur because participants use mechanism information to elaborate on the causal structure, interpolating the underlying mechanism into the causal graph ( Park & Sloman, 2013 ). So, when the link between A and B depends on a different mechanism than the link between A and C , the resulting structure would involve two branches emanating from A , namely A → M 1 → B and A → M 2 → C . In Lisa’s case, cellular damage might be the mechanism mediating smoking and lung impairment, but cigarette expenditures would be the mechanism mediating smoking and financial burden. Thus, knowing about C (Lisa’s financial burden) triggers an inference about M 2 (cigarette expenditures), but this knowledge has no effect on B (lung impairment) given that A (smoking) is known—the Markov condition is respected. But when the link between A and B depends on the same mechanism as the link between A and C , the resulting structure would be a link from A to M 1 , and then from M 1 to B and to C —so, in effect, the mechanism M 1 is the common cause, rather than A . That is, cellular damage might be the mechanism mediating the relationship between smoking and lung impairment and the relationship between smoking and blood vessel damage. Thus, knowing about C (blood vessel damage) triggers an inference about M 1 (cellular damage), and this knowledge has an effect on B (lung impairment) even if A (smoking) is known. Mechanism knowledge therefore not only affects the interpretation of covariation information, but also the very computational principles used to make inferences over systems of variables.
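This interpolation account can be sketched numerically. In the toy network below (all parameters invented), a single mechanism M mediates both links, so learning about C revises beliefs about M, and hence about B, even when A is already known.

```python
from itertools import product

# Toy "same mechanism" network A -> M -> {B, C}: smoking (A) causes
# cellular damage (M), which causes both lung impairment (B) and blood
# vessel damage (C). Parameters are made up; because M is noisy, C
# carries news about M even once A is known.
p_a = 0.3
p_m = {True: 0.6, False: 0.05}  # P(cellular damage | smoking status)
p_b = {True: 0.9, False: 0.1}   # P(lung impairment | damage status)
p_c = {True: 0.9, False: 0.1}   # P(vessel damage | damage status)

def joint(a, m, b, c):
    pa = p_a if a else 1 - p_a
    pm = p_m[a] if m else 1 - p_m[a]
    pb = p_b[m] if b else 1 - p_b[m]
    pc = p_c[m] if c else 1 - p_c[m]
    return pa * pm * pb * pc

def prob(query, given):
    """P(query | given), marginalizing over the unobserved mechanism M."""
    num = den = 0.0
    for a, m, b, c in product([True, False], repeat=4):
        world = {"A": a, "M": m, "B": b, "C": c}
        if all(world[k] == v for k, v in given.items()):
            p = joint(a, m, b, c)
            den += p
            if all(world[k] == v for k, v in query.items()):
                num += p
    return num / den

# Markov "violation" over the coarse variables {A, B, C}: C raises the
# probability of B even given A, because both route through M.
print(round(prob({"B": True}, {"A": True}), 3))             # 0.58
print(round(prob({"B": True}, {"A": True, "C": True}), 3))  # 0.845
```

On this view the judgment pattern is not irrational at all; it is exactly what inference over the elaborated structure prescribes.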

Temporal Cues

According to the principle of temporal contiguity , two events are more likely to be causally connected if they occur close together in time. This idea has considerable empirical support (e.g., Lagnado & Sloman, 2006 ; Michotte, 1946/1963 ), and at least in some contexts, temporal contiguity appears to be used more readily than covariation in learning causal relations ( Rottman & Keil, 2012 ; White, 2006 ). The use of temporal contiguity was long taken as a triumph for associationist theories of causal inference ( Shanks, Pearson, & Dickinson, 1989 ), because longer temporal delays are associated with weaker associations in associationist learning models.

Yet, people’s use of temporal cues appears to be more nuanced. People are able to associate causes and effects that are very distant in time ( Einhorn & Hogarth, 1986 ). For example, a long temporal gap intervenes between sex and birth, between smoking and cancer, between work and paycheck, and between murder and prison. Why is it that the long temporal gaps between these events do not prevent us from noticing these causal links?

A series of papers by Buehner and colleagues documented top-down influences of causal knowledge on the use of temporal contiguity (see Buehner, Chapter 28 in this volume). When participants expect a delay between cause and effect, longer delays have a markedly smaller deleterious effect on causal inference (Buehner & May, 2002 , 2003 ), suggesting some knowledge mediation. In fact, when temporal delay is de-confounded with contingency, the effect of temporal delay can be eliminated altogether by instructions that induce the expectation of delay ( Buehner & May, 2004 ). Most dramatically, some experiments used unseen physical causal mechanisms, which participants would believe to take a relatively short time to operate (a ball rolling down a steep ramp, hidden from view) or a long time to operate (a ball rolling down a shallow ramp). Under such circumstances, causal judgments were facilitated by longer delays between cause and effect, when the mechanism was one that would take a relatively long time to operate ( Buehner & McGregor, 2006 ). Although older (9- to 10-year-old) children can integrate such mechanism cues with temporal information, younger (4- to 8-year-old) children continue to be swayed by temporal contiguity, suggesting that the relative priority of causal cues undergoes development ( Schlottmann, 1999 ). Thus, when people can apply a mechanism to a putative causal relationship, they adjust their expectations about temporal delay so as to fit their knowledge of that mechanism.

Mechanisms and Induction

The raison d’être for high-level cognition in general, and for causal inference in particular, is to infer the unknown from the known—to make predictions that will usefully serve the organism through inductive inference ( Murphy, 2002 ; Rehder, Chapters 20 and 21 in this volume). In this section, we give several examples of ways that mechanism knowledge is critical to inductive inference.

Categories are a prototypical cognitive structure that exists to support inductive inference. We group together entities with similar known properties, because those entities are likely to also share similar unknown properties ( Murphy, 2002 ). Mechanism knowledge influences which categories we use. In a study by Hagmayer, Meder, von Sydow, and Waldmann (2011) , participants learned the contingency between molecules and cell death. Molecules varied in size (large or small) and color (white or gray). While large white (11) molecules always led to cell death and small gray (00) molecules never did, small white (01) and large gray (10) ones led to cell death 50% of the time. That is, 01 and 10 were equally predictive of cell death. However, prior to this contingency learning, some participants learned that molecule color was caused by a genetic mutation. Participants used this prior causal history to categorize small white molecules (01) with large white (11) molecules, which always resulted in cell death. Consequently, these participants judged that small white molecules (01) were much more likely to result in cell death than large gray molecules (10), even though they observed both probabilities to be 50%. The opposite pattern was obtained when participants learned that genetic mutation caused molecules to be large.

Critically, this effect of prior categorization on subsequent causal learning depended on the type of underlying mechanism. Note that most people would agree that genetic mutations affect deeper features of molecules, which not only modify surface features such as color of molecules, but also can affect the likelihood of cell death. Thus, the initial category learning based on the cover story involving genetic mutations provided a mechanism, which could affect later causal judgments involving cell death. In a subsequent experiment, however, the cover story used for category learning provided an incoherent mechanism. Participants learned that the variations in color (or size) were due to atmospheric pressure, which would be viewed as affecting only the surface features. Despite identical learning situations, participants provided with mechanism information that was relevant only to surface features did not distinguish between 10 and 01 in their causal judgments; their judgments stayed close to 50%. Thus, Hagmayer et al. (2011) showed that prior learning of categorization affects subsequent causal judgments only when the categorization involves mechanisms that would be relevant to the content of the causal judgments (see also Waldmann & Hagmayer, 2006 , for related results).

More generally, people are likely to induce and use categories that are coherent ( Murphy & Allopenna, 1994 ; Rehder & Hastie, 2004 ; Rehder & Ross, 2001 ). A category is coherent to the extent that its features “go together,” given the reasoner’s prior causal theories ( Murphy & Medin, 1985 ). For example, “lives in water, eats fish, has many offspring, and is small” is a coherent category, because one can think of a causal mechanism that unifies these features, supplying the necessary mechanism knowledge; in contrast, “lives in water, eats wheat, has a flat end, and is used for stabbing bugs” is an incoherent category because it is difficult to supply mechanisms that could unify these features in a single causal theory ( Murphy & Wisniewski, 1989 ). Categories based on a coherent mechanism are easier to learn ( Murphy, 2002 ), are more likely to support the extension of properties to new members ( Rehder & Hastie, 2004 ), and require fewer members possessing a given property to do so ( Patalano & Ross, 2007 ).

Mechanism knowledge also influences category-based induction , or the likelihood of extending features from one category to another (see Heit, 2000 , for a review). If the mechanism explaining why the premise category has a property is the same as the mechanism explaining why the conclusion category might have the property, then participants tend to rate the conclusion category as very likely having that property ( Sloman, 1994 ). For example, participants found the following argument highly convincing:

Hyundais have tariffs applied to them; therefore, Porsches have tariffs applied to them.

That is, the reason that Hyundais have tariffs applied to them is because they are foreign cars, which would also explain why Porsches have tariffs applied to them. So, the premise in this case strongly supports the conclusion. In contrast, one may discount the likelihood of a conclusion when the premise and conclusion rely on different mechanisms, such as:

Hyundais are usually purchased by people 25 years old and younger; therefore, Porsches are usually purchased by people 25 years old and younger.

In this case, the reason that Hyundais are purchased by young people (that Hyundais are inexpensive and young people do not have good credit) does not apply to Porsches (which might be purchased by young people because young people like fast cars). Because the premise introduces an alternative explanation for the property, people tend to rate the probability of the conclusion about Porsches lower when the premise about Hyundais is given, compared to when it is not given—an instance of the discounting or explaining-away effect ( Kelley, 1973 ). These results show that mechanism knowledge can moderate the likelihood of accepting an explanation in the presence of another explanation.

Ahn and Bailenson (1996) further examined the role of mechanism knowledge in the discounting and conjunction effects. In the discounting effect ( Kelley, 1973 ), people rate the probability P( B ) of one explanation higher than its conditional probability given another competing explanation, P( B | A ). In the conjunction effect ( Tversky & Kahneman, 1983 ), people rate the probability of a conjunctive explanation, P( A & B ), higher than its individual constituents such as P( A ). The two effects may appear contradictory because the discounting effect seems to imply that one explanation is better than two, whereas the conjunction effect seems to imply that two explanations are better than one. Yet, Ahn and Bailenson (1996) showed that both phenomena turn on mechanism-based reasoning, and can occur simultaneously with identical events. For example, consider the task of explaining why Kim had a traffic accident. Further suppose that a reasoner learns that Kim is nearsighted. Given this explanation, a reasoner can imagine Kim having a traffic accident due to her nearsightedness. Note that to accept this explanation, one has to imagine that Kim’s nearsightedness is severe enough to cause a traffic accident even under normal circumstances. Once such a mechanism is established, another explanation, “there was a severe storm,” would be seen as less likely because Kim’s nearsightedness is already a sufficient cause for a traffic accident. Thus, the second cause would be discounted. However, consider a different situation where both explanations are presented as being tentative and are to be evaluated simultaneously. Thus, one is to judge the likelihood that Kim had a traffic accident because she is nearsighted and there was a severe storm. In this case, a reasoner can portray a slightly different, yet coherent mechanism where Kim’s (somewhat) poor vision, coupled with poor visibility caused by a storm, would have led to a traffic accident. 
Due to this coherent mechanism, the reasoner would be willing to accept the conjunctive explanation as highly likely—even as more likely than either of its conjuncts individually. That is, the discounting effect occurs because a reasoner settles in on a mechanism that excludes a second explanation, whereas the conjunction effect occurs because a reasoner can construct a coherent mechanism that can incorporate both explanations.
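The discounting side of this pattern has a normative counterpart, explaining away in a common-effect network; the conjunction effect, by contrast, violates the probability axioms, so no coherent probability model reproduces it. Below is a minimal noisy-OR sketch of explaining away for the Kim example, with all priors and causal strengths invented for illustration.

```python
from itertools import product

# Common-effect (collider) network: nearsightedness (N) and a storm (S)
# each can cause an accident (E) via a noisy-OR. All numbers invented.
p_n, p_s = 0.1, 0.1   # priors on each cause
w_n, w_s = 0.6, 0.6   # noisy-OR causal strengths

def p_e(n, s):
    """P(accident | cause states), noisy-OR combination."""
    return 1 - (1 - w_n * n) * (1 - w_s * s)

def prob_s(given_n=None):
    """P(storm | accident), optionally also conditioning on N."""
    num = den = 0.0
    for n, s in product([0, 1], repeat=2):
        if given_n is not None and n != given_n:
            continue
        p = (p_n if n else 1 - p_n) * (p_s if s else 1 - p_s) * p_e(n, s)
        den += p
        if s:
            num += p
    return num / den

print(round(prob_s(), 3))           # 0.536: storm fairly credible given accident
print(round(prob_s(given_n=1), 3))  # 0.135: discounted once nearsightedness is known
```

The drop in the second probability is the explaining-away effect; on Ahn and Bailenson's account, people realize it by mentally settling on a mechanism that already suffices for the effect.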

In addition to demonstrating simultaneous conjunction and discounting effects, Ahn and Bailenson (1996) further showed that these effects do not occur when explanations are purely covariation-based—that is, when the explanations indicate positive covariation between a potential cause and effect without suggesting any underlying mechanism mediating their relationship. For instance, the explanations “Kim is more likely to have traffic accidents than other people are” and “traffic accidents were more likely to occur last night than on other nights” resulted in neither conjunction nor discounting effects. This pattern of results indicates that both discounting and conjunction effects are species of mechanism-based reasoning.

Open Questions

These studies demonstrate a variety of ways that mechanism knowledge pervades our inductive capacities, but mechanism knowledge could affect induction in yet other ways. Beyond covariation, structural constraints, and temporal cues, might other cues to causality be affected by the nature of the underlying mechanisms? For instance, might the results of interventions be interpreted differently given different mechanisms? Might mechanism knowledge modulate the relative importance of these various cues to causality?

There are also open questions about how mechanisms are used in induction. Given the tight link between mechanisms and explanation, what role might mechanisms play in inference to the best explanation, or abductive inference ( Lipton, 2004 ; Lombrozo, 2012 )? To what extent do different sorts of inductive problems ( Kemp & Jern, 2013 ) lend themselves more to mechanism-based versus probability-based causal reasoning (see also Lombrozo, 2010 )? Are there individual differences in the use of mechanisms? For instance, given that mechanisms underlie surface events, could people who are more intolerant of ambiguity or more in need of cognitive closure be more motivated to seek them out? Could people who are high in creativity be more capable of generating them, and more affected by them as a result? Finally, although we could in principle keep on asking “why” questions perpetually, we eventually settle for a given level of detail as adequate. What determines this optimal level of mechanistic explanation?

Representing Causal Mechanisms

In the previous section, we described several of the cognitive processes that use mechanism knowledge. Here, we ask how mechanism knowledge is mentally represented ( Markman, 1999 ). That is, what information do we store about mechanisms, and how do different mechanisms relate to one another in memory? We consider six possible representational formats—associations, forces or powers, icons, abstract placeholders, networks, and schemas.

Associations

According to associationist theories of causality, learning about causal relationships is equivalent to learning associations between causes and effects, using domain-general learning mechanisms that are evolutionarily ancient and used in other areas of cognition ( Shanks, 1987 ; Le Pelley, Griffiths, & Beesley, Chapter 2 in this volume). Thus, causal relations (including mechanism knowledge) would be represented as an association between two classes of events, akin to the stored result of a statistical significance test, so that one event would lead to the expectation of the other. This view is theoretically economical, in that associative learning is well established and well understood in other domains and in animal models. Further, associative learning can explain many effects in trial-by-trial causal learning experiments, including effects of contingency ( Shanks, 1987 ) and delay ( Shanks, Pearson, & Dickinson, 1989 ).

However, purely associative theories of causation have fallen on hard times. Because these theories generally do not distinguish between the role of cause and effect, they have difficulty accounting for asymmetries in predictive and diagnostic causal learning ( Waldmann, 2000 ; Waldmann & Holyoak, 1992 ). Further, these theories predict a monotonic decline in associative strength with a delay between cause and effect, yet this decline can be eliminated or even reversed with appropriate mechanism knowledge ( Buehner & May, 2004 ; Buehner & McGregor, 2006 ). Although associative processes are likely to play some role in causal reasoning and learning (e.g., Rehder, 2014 ), causal learning appears to go beyond mere association.

There are also problems with associations as representations of mechanism knowledge. One straightforward way of representing mechanism knowledge using associations is to represent causal relations among sub-parts or intermediate steps between cause and effect using associations. Thus, the association between cause and effect would consist of associations between the cause and the first intermediate step, between the first and second intermediate steps, and so on, while the overall association between cause and effect remains the same. This approach to mechanisms may be able to account for some effects of mechanism knowledge described earlier. For example, to account for why people believe more strongly in a causal link given a plausible mechanism for observed covariation ( Fugelsang & Thompson, 2000 ), an advocate of associationism can argue that the mechanism conveys additional associative strength.

However, other effects of mechanism knowledge described earlier seem more challenging to the associationist approach. Ahn et al. (1995 ; Experiment 4) equated the covariation or association conveyed by the mechanism statements and the covariation statements, but participants nonetheless gave stronger causal attributions given the mechanism statements than covariation statements. Likewise, it is unclear on the associationist approach why conjunction and discounting effects are not obtained given purely covariational statements ( Ahn & Bailenson, 1996 ), or why mechanism knowledge influences which categories we induce, given identical learning data ( Hagmayer et al., 2011 ).

Forces and Powers

The associationist view contrasts most strongly with accounts of causal mechanisms in terms of forces ( Talmy, 1988 ; Wolff, 2007 ) or powers ( Harré & Madden, 1975 ; White, 1988 , 1989 ). The intuition behind these approaches is that causal relations correspond to the operation of physical laws, acting on physical objects ( Aristotle, 1970 ; Harré & Madden, 1975 ) or through physical processes ( Dowe, 2000 ; Salmon, 1984 ; see also Danks, Chapter 12 in this volume). For example, Dowe (2000) argued that causal relations occur when a conserved quantity, such as energy, is transferred from one entity to another. This idea is broadly consistent with demonstrations that people often identify visual collision events as causal or non-causal in ways concordant with the principles of Newtonian mechanics, such as conservation of momentum ( Michotte, 1946/1963 ). Indeed, even young children seem to be sensitive to physical factors such as transmission in their causal reasoning ( Bullock, Gelman, & Baillargeon, 1982 ; Shultz, Fisher, Pratt, & Rulf, 1986 ).

The force dynamics theory ( Talmy, 1988 ; Wolff, 2007 ; Wolff & Thorstad, Chapter 9 in this volume) fleshes out these intuitions by representing causal relations as combinations of physical forces, modeled as vectors. On this theory, the causal affector (the entity causing the event) and patient (the entity operated on by the affector) are both associated with force vectors, indicating the direction of the physical or metaphorical forces in operation. For example, in a causal interaction between a fan and a toy boat, the fan would be the affector and the toy boat would be the patient, and both entities would have a vector indicating the direction of their motion. These forces, as well as any other forces in the environment, would combine to yield a resultant vector (e.g., the boat hits an orange buoy). On Wolff’s (2007) theory, the affector causes a particular end state to occur if (a) the patient initially does not have a tendency toward that endstate, but (b) the affector changes the patient’s tendency, and (c) the end state is achieved. For instance, the fan caused the boat to hit the buoy because (a) the boat was not initially headed in that direction, but (b) the fan changed the boat’s course, so that (c) the boat hit the buoy. This sort of force analysis has been applied to several phenomena in causal reasoning, including semantic distinctions among causal vocabulary ( cause, enable, prevent, despite ; Wolff, 2007 ); the chaining of causal relations (e.g., A preventing B and B causing C ; Barbey & Wolff, 2007 ); causation by omission ( Wolff, Barbey, & Hausknecht, 2010 ); and direct versus indirect causation ( Wolff, 2003 ).
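Three of these distinctions (cause, enable, prevent) can be roughly sketched by reducing a force configuration to the boolean dimensions Wolff's model uses: whether the patient tends toward the end state, whether the affector and patient forces agree, and whether the end state results. The 2-D vectors and dot-product tests below are our own simplification for illustration, not the model itself.

```python
# Rough sketch of force-dynamics classification (after Wolff, 2007).
# Vectors are (x, y) force components; the reduction to booleans via
# dot-product signs is an illustrative simplification.

def same_direction(v, w):
    """Do two 2-D vectors point the same way (positive dot product)?"""
    return v[0] * w[0] + v[1] * w[1] > 0

def classify(affector, patient, endstate):
    tendency = same_direction(patient, endstate)     # patient headed there?
    concordance = same_direction(affector, patient)  # forces agree?
    resultant = (affector[0] + patient[0], affector[1] + patient[1])
    result = same_direction(resultant, endstate)     # end state reached?
    if not tendency and not concordance and result:
        return "CAUSE"
    if tendency and concordance and result:
        return "ENABLE"
    if tendency and not concordance and not result:
        return "PREVENT"
    return "OTHER"

# Fan pushes the drifting boat toward the buoy it was not headed for:
print(classify(affector=(3, 0), patient=(-1, 0), endstate=(1, 0)))  # CAUSE
```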

A related physicalist approach is the causal powers theory ( Harré & Madden, 1975 ; White, 1988 , 1989 ). On this view, people conceptualize particulars (objects or persons) as having dispositional causal properties, which operate under the appropriate releasing conditions . These properties can be either causal powers (capacities to bring about effects) or liabilities (capacities to undergo effects). For example, a hammer might strike a glass watch face, causing it to break ( Einhorn & Hogarth, 1986 ). In this case, the hammer has a power to bring about breaking, and the glass has the liability to be broken. (See White, 2009b , for a review of many studies consistent with the notion that causal relations involve transmission of properties among entities.) People then make causal predictions and inferences based on their knowledge of the causal powers and liabilities of familiar entities.

These physicalist theories capture a variety of intuitions and empirical results concerning causal thinking (see Waldmann & Mayrhofer, 2016 ), and any complete theory of causal mechanisms must account for these phenomena. However, these theories are compatible with many different underlying representations. In the case of force dynamics, the vector representations are highly abstract and apply to any causal situation. That is, this theory does not posit representations for specific mechanisms in semantic memory, and therefore mechanism representations could take one of many formats. In the case of causal powers theory, the reasoner must represent properties of particular objects, which in combination could lead to representations of specific mechanisms. However, these property representations could potentially take several different representational formats, including icons and schemas (see later discussion). Thus, although force and power theories certainly capture important aspects of causal reasoning, they do not provide a clear answer to the question of how mechanisms are mentally represented.

Icons

A related possibility is that people represent causal mechanisms in an iconic or image-like format. For example, when using mechanism knowledge to think about how a physical device works, the reasoner might mentally simulate the operation of the machine using mental imagery. More generally, people might store mechanism knowledge in an iconic format isomorphic to the physical system ( Barsalou, 1999 )—a view that sits comfortably with the physicalist theories described earlier. ( Goldvarg and Johnson-Laird, 2001 , propose a different, broadly iconic view of causal thinking based on mental models; see also Johnson-Laird & Khemlani, Chapter 10 in this volume.)

Forbus’s (1984) qualitative process theory is an artificial intelligence theory of this style of reasoning. Qualitative process theory is designed to solve problems such as whether a bathtub will overflow, given the rate of water flowing out of the faucet, the rate of drainage, and the rate of evaporation. This theory is “qualitative” in the sense that it compares quantities and stores the direction of change, but does not reason about exact quantities. In this way, it is supposed to be similar to how humans solve these problems.
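To make the flavor of this qualitative style concrete, consider the following sketch (a toy illustration of the idea, not Forbus's actual program; the numeric arguments serve only to fix the comparisons, since the theory itself works from ordinal relations among rates):

```python
# Toy sketch of qualitative reasoning about the bathtub example.
# Only the *direction* of the net change is represented and used,
# never its exact magnitude.

def level_trend(inflow, drainage, evaporation):
    """Qualitative direction of change of the water level:
    '+' rising, '-' falling, '0' steady."""
    net = inflow - (drainage + evaporation)
    return '+' if net > 0 else '-' if net < 0 else '0'

def will_overflow(inflow, drainage, evaporation):
    # With constant rates, the tub eventually overflows exactly when the
    # level is qualitatively rising; the answer never depends on how much
    # faster the faucet runs than the drain, only on the sign.
    return level_trend(inflow, drainage, evaporation) == '+'
```

Here `will_overflow(10, 6, 1)` is true because inflow qualitatively exceeds the combined losses, while `will_overflow(3, 6, 1)` is false; no exact quantities figure in the answer.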

However, even if qualitative process theory accurately characterizes human problem-solving processes, it is unclear whether these processes rely on mental representations that are propositional or image-like; after all, qualitative process theory itself is implemented in a computer programming language, using propositional representations. Several experimental results have been taken to support image-like representations (see Hegarty, 2004 , for a review). First, when solving problems about physical causal systems (such as diagrams of pulleys or gears), participants who think aloud are likely to make gestures preceding their verbal descriptions, suggesting that spatial reasoning underlies their verbalizations ( Schwartz & Black, 1996 ). Second, solving such problems appears to rely on visual rather than verbal ability: performance is predicted by individual differences in spatial ability but not in verbal ability ( Hegarty & Sims, 1994 ), and dual-task studies reveal interference between mechanical reasoning and maintenance of a visual working memory load, but not a verbal working memory load ( Sims & Hegarty, 1997 ).

It is an open question whether people run image-like mental simulations even when reasoning about causal processes that are less akin to physical systems, but some indirect support exists. For instance, asymmetries in cause-to-effect versus effect-to-cause reasoning suggest that people may use simulations. Tversky and Kahneman (1981) showed that people rate the conditional probability of a daughter having blue eyes given that her mother has blue eyes to be higher than the conditional probability of a mother having blue eyes given that the daughter has blue eyes. If the base rates of mothers and daughters having blue eyes are equal, these probabilities should be the same, but people appear to err because they make higher judgments when probability “flows” with the direction of causality (for similar findings, see Fernbach, Darlow, & Sloman, 2010 , 2011 ; Medin, Coley, Storms, & Hayes, 2003 ; Pennington & Hastie, 1988 ). While these results do not necessitate image-like representations, they do speak in favor of simulation processes, as forward simulations appear to be more easily “run” than backward simulations, just as films with a conventional narrative structure are more readily understood than films like Memento in which the plot unfolds in reverse order.
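The normative claim here is a one-line consequence of Bayes' rule, which can be verified with made-up numbers (the only load-bearing assumption is that the two base rates are equal):

```python
# Bayes' rule check for the blue-eyes problem. All numbers are invented;
# what matters is only that the two base rates are set equal.

p_mother = 0.27        # P(mother has blue eyes), arbitrary
p_daughter = 0.27      # P(daughter has blue eyes), assumed equal
p_d_given_m = 0.60     # P(daughter blue | mother blue), arbitrary

# P(mother | daughter) = P(daughter | mother) * P(mother) / P(daughter)
p_m_given_d = p_d_given_m * p_mother / p_daughter

# With equal base rates, the two conditional probabilities coincide,
# so rating one higher than the other is a normative error.
assert abs(p_m_given_d - p_d_given_m) < 1e-12
```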

However, other arguments and evidence suggest that these results may be better understood in terms of non-iconic representations. First, a number of researchers have argued that there are fundamental problems with iconic representations. Pylyshyn (1973) argues, for example, that if we store iconic representations and use them in the same way that we use visual perception, then we need a separate representational system to interpret those icons, just as we do for vision. Rips (1984) criticizes mental simulation more generally, pointing out that the sort of mental simulation posited by AI systems in all but the simplest cases is likely to be beyond the cognitive capacity of human reasoners. Reasoning about turning gears is one thing, but Kahneman and Tversky (1982) claim that people use mental simulation to assess the probabilities of enormously complex causal systems, such as geopolitical conflict. Clearly, the number and variety of causal mechanisms at play for such simulations is beyond the ken of even the most sophisticated computer algorithms, much less human agents. In Rips’s view, rule-based mechanisms are far more plausible candidates for physical causal reasoning. According to both Pylyshyn and Rips, then, the phenomenology of mental simulation may be epiphenomenal.

There is also empirical evidence at odds with iconic representations of mechanisms. For example, Hegarty (1992) gave participants diagrams of systems of pulleys, and asked them questions such as “If the rope is pulled, will pulley B turn clockwise or counterclockwise?” Response times were related to the number of components between the cause (here, the rope) and effect (pulley B). While this result is broadly consistent with the idea of mental simulation, it suggests that people simulate the system piecemeal rather than simultaneously (as one might expect for a mental image or “movie”). More problematically, participants’ judgments appear to be mutually inconsistent when considered together. In a study by Rips and Gentner (reported in Rips, 1984 ), participants were told about a closed room containing a pan of water. They were asked about the relations between different physical variables (such as air temperature, evaporation rate, and air pressure)—precisely the sort of inferences that mental simulations (such as those proposed by qualitative process theory) are supposed to be used for. The researchers found that people not only answered these questions inconsistently with the laws of physics, but even made intransitive inferences. That is, participants very frequently claimed that a variable X causes a change in variable Y , which in turn causes a change in variable Z , but that X does not cause a change in Z . Such responses should not be possible if people are qualitatively simulating the physical mechanisms at work: even if their mechanism knowledge diverges from the laws of physics, it should at least be internally consistent. ( Johnson and Ahn, 2015 , review several cases where causal intransitivity can be normative, but none of these cases appears to be relevant to the stimuli used in the Rips and Gentner study.) These results are more consistent with a schema view of mechanism knowledge (see later in this chapter).

In sum, while studies of physical causal reasoning provide further evidence that causal thinking and mechanism knowledge in particular are used widely across tasks, they do not seem to legislate strongly in favor of iconic representations of mechanism knowledge. These results do, however, provide constraints on what representations could be used for mechanism-based reasoning.

Placeholders

A fourth representational candidate is a placeholder or reference pointer . On this view, people do not have elaborate knowledge about causal mechanisms underlying causal relations, but instead have a placeholder for a causal mechanism. That is, people would believe that every causal relation has an (unknown) causal mechanism, yet in most cases would not explicitly represent the content. (See Keil, 1989 ; Kripke, 1980 ; Medin & Ortony, 1989 ; Putnam, 1975 for the original ideas involving conceptual representations; and see Pearl, 2000 for a related, formal view.)

The strongest evidence for this position comes from metacognitive illusions , in which people consistently overestimate their knowledge about causal systems ( Rozenblit & Keil, 2002 ). In a demonstration of the illusion of explanatory depth (IOED), participants were asked to rate their mechanistic knowledge of how a complex but familiar artifact operates (such as a flush toilet). Participants were then instructed to explain in detail how that artifact operates. When participants re-rated their mechanistic knowledge afterward, their ratings were sharply lower, indicating that the act of explaining brought into awareness the illusory nature of their mechanistic knowledge. Thus, people’s representations of causal mechanisms appear to differ from their meta-representations—people’s representations of mechanisms are highly skeletal and impoverished, yet their meta-representations point to much fuller knowledge.

Further, this illusion goes beyond general overconfidence. Although similar effects can be found in other complex causal domains (e.g., natural phenomena such as how tides occur), people’s knowledge is comparatively well calibrated in non-causal domains, such as facts (e.g., the capital of England), procedures (e.g., how to bake chocolate chip cookies from scratch), and narratives (e.g., the plot of Good Will Hunting ), although some (more modest) overconfidence can be found in these other domains as well ( Fischhoff, Slovic, & Lichtenstein, 1977 ).

Together, these results suggest that, at least in some cases, people do not store detailed representations of mechanisms in their heads, but rather some skeletal details, together with a meta-representational placeholder or “pointer” to some unknown mechanism assumed to exist in the world. These impoverished representations, together with the robust illusions of their richness, are another reason to be suspicious of iconic representations of mechanism knowledge (see earlier subsection, “Icons” ). To the extent that iconic formats seem plausible because they feel introspectively right, we should be suspicious that this intuition is itself a metacognitive illusion.

However, in addition to these meta-representational pointers or placeholders, people clearly do have some skeletal representations of mechanisms. Many of the effects described in earlier sections depend on people having some understanding of the content of the underlying mechanisms (e.g., Ahn & Bailenson, 1996 ; Ahn et al., 1995 ; Fugelsang & Thompson, 2000 ). And although people’s mechanistic knowledge might be embarrassingly shallow for scientific phenomena and mechanical devices, it seems to be more complete for mundane phenomena. For instance, people often drink water after they exercise. Why? Because they become thirsty. Although the physiological details may elude most people, people surely understand this mechanism at a basic, skeletal level. If not as associations, causal powers, or icons, what format do these representations take? Next, we consider two possibilities for these skeletal representations—causal networks and schemas.

Networks

The idea that causal mechanisms might be represented as networks has recently received much attention (e.g., Glymour & Cheng, 1998 ; Griffiths & Tenenbaum, 2009 ; Pearl, 2000 ). According to this view, causal relationships are represented as links between variables in a directed graph, encoding the probabilistic relationships among the variables and the counterfactuals entailed by potential interventions (see Rottman, Chapter 6 in this volume for more details). For example, people know that exercising ( X ) causes a person to become thirsty ( Y ), which in turn causes a person to drink water ( Z ). The causal arrows expressed in the graph encode facts such as: (1) exercising raises the probability that a person becomes thirsty (a probabilistic dependency); and (2) intervening to make a person exercise (or not exercise) will change the probability of thirst (a counterfactual dependency). The relationship between thirst ( Y ) and drinking water ( Z ) can be analyzed in a similar way. These two relationships can lead a reasoner to infer, transitively, a positive covariation between exercise ( X ) and drinking water ( Z ), and a counterfactual dependence between interventions on exercise and the probability of drinking water (but see the following subsection, “Schemas,” for several normative reasons why causal chains can be intransitive). Similarly, the effects of drinking water will also have probabilistic and counterfactual relationships to exercise, as will the alternative causes of drinking water, and so on. These networks are used in artificial intelligence systems because they are economical and efficient ways of storing and reasoning about causal relationships (Pearl, 1988 , 2000 ; Spirtes, Glymour, & Scheines, 1993 ).

If causal knowledge is represented in causal networks, then causal mechanisms could be reducible to the probabilistic dependencies and counterfactual entailments implied by the network. One proponent of this view is Pearl (1988) , who argued that our knowledge is fundamentally about probabilities, and that causal relationships are merely shorthand for probabilistic relationships (though Pearl, 2000 , argues for a different view; see “Open Questions” later in this section). If causal relations are merely abbreviations of probabilistic relationships, we can define a mechanism for the causal relationship X → Z as a variable Y which, when conditioned on, makes the correlation between X and Z go to zero ( Glymour & Cheng, 1998 ) so that the Markov condition is satisfied. That is, Y is a mechanism for X → Z if P( Z | X ) > P( Z |~ X ), but P( Z | X,Y ) = P( Z | ~X,Y ). The intuition here is the same as in mediation analysis in statistics—a variable Y is a full mechanism or mediator if it accounts for the entirety of the relationship between X and Z . As an example, Glymour and Cheng (1998 , p. 295) cite the following case (from Baumrind, 1983 ):

The number of never-married persons in certain British villages is highly inversely correlated with the number of field mice in the surrounding meadows. [Marriage] was considered an established cause of field mice by the village elders until the mechanisms of transmission were finally surmised: Never-married persons bring with them a disproportionate number of cats.

In this case, the number of cats ( Y ) would be a mechanism that mediates the relationship between marriage ( X ) and field mice ( Z ) because there is no longer a relationship between marriage and field mice when the number of cats is held constant. In the next section, we discuss limitations of conceptualizing mechanisms this way, after describing the schema format.
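The screening-off definition can be checked numerically on a toy joint distribution (all parameters invented) in which cats are the sole mechanism linking marriage status to mice: X and Z covary unconditionally, but conditioning on the mediator Y makes the dependence vanish, as the Markov condition requires.

```python
from itertools import product

# Invented parameters for the chain X -> Y -> Z:
# X = person is never-married, Y = cats present, Z = field mice present.
p_x1 = 0.5
p_y1_given_x = {1: 0.8, 0: 0.2}   # never-married persons bring cats
p_z1_given_y = {1: 0.1, 0: 0.7}   # cats suppress field mice

# Build the full joint distribution P(X, Y, Z) implied by the chain.
joint = {}
for x, y, z in product((0, 1), repeat=3):
    px = p_x1 if x else 1 - p_x1
    py = p_y1_given_x[x] if y else 1 - p_y1_given_x[x]
    pz = p_z1_given_y[y] if z else 1 - p_z1_given_y[y]
    joint[(x, y, z)] = px * py * pz

def p_z1(x=None, y=None):
    """P(Z=1 | X=x, Y=y); pass None to leave a variable unconditioned."""
    num = den = 0.0
    for (xi, yi, zi), p in joint.items():
        if (x is not None and xi != x) or (y is not None and yi != y):
            continue
        den += p
        num += p if zi == 1 else 0.0
    return num / den

# X and Z are (inversely) dependent unconditionally...
assert p_z1(x=1) < p_z1(x=0)
# ...but conditioning on the mediator Y screens X off from Z entirely:
assert abs(p_z1(x=1, y=1) - p_z1(x=0, y=1)) < 1e-9
assert abs(p_z1(x=1, y=0) - p_z1(x=0, y=0)) < 1e-9
```

Here Y counts as a full mediator in exactly the sense of the definition above: the X–Z correlation goes to zero once Y is conditioned on.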

Schemas

Finally, mechanism knowledge might be represented in the form of schemas —clusters of content-laden knowledge stored in long-term memory. Schemas are critical for inductive inference because they are general knowledge that can be used to instantiate many specific patterns ( Bartlett, 1932 ; Schank & Abelson, 1977 ). For example, if Megan tells you about her ski trip, you can already fill in a great amount of the detail without her explicitly telling you—you can assume, for example, that there was a mountain, that the ground was snowy, that warm beverages were available in the lodge, and so on. Causal mechanisms could likewise be represented as clusters of knowledge about the underlying causal relations.

Like networks, schemas are a more skeletal representation and would not necessarily implicate image-like resources. Unlike networks, however, relationships between causally adjacent variables would not necessarily be stored together. This is because two causal relationships can be “accidentally” united in a causal chain by sharing an event in common, yet not belong to the same schema. For example, we have a schema for sex causing pregnancy, and another schema for pregnancy causing nausea. But we may not have a schema for the relationship between sex and nausea. On the network view discussed earlier ( Glymour & Cheng, 1998 ), because these three events are related in a causal chain, pregnancy is a mechanism connecting sex and nausea. On the schema view, in contrast, sex and nausea might not even be seen as causally related.

To distinguish between networks and schemas, Johnson and Ahn (2015) tested people’s judgments about the transitivity of causal chains—the extent to which, given that A causes B and B causes C, A is seen as a cause of C . According to the network view, the A → C relationship should be judged as highly causal to the extent that A → B and B → C are seen as highly causal. In contrast, the schema view implies that A → C would be judged as highly causal only if A and C belong to the same schema, even if A → B and B → C are strong. This is exactly what was found. For chains that were found in a preliminary experiment to be highly schematized (e.g., Carl studied, learned the material, and got a perfect score on the test), participants gave high causal ratings to A → B, B → C , and A → C (agreeing that Carl studying caused him to get a perfect score on the test). But for chains that were not schematized (e.g., Brad drank a glass of wine, fell asleep, and had a dream), participants gave high causal ratings for A → B and B → C , but not for A → C (denying that Brad’s glass of wine made him dream). Johnson and Ahn (2015) also ruled out several normative explanations for causal intransitivity (e.g., Hitchcock, 2001 ; Paul & Hall, 2013 ). For example, causal chains can be normatively intransitive when the Markov condition is violated, but the Markov condition held for the intransitive chains. Similarly, chains can appear intransitive if one or both of the intermediate links ( A → B or B → C ) is probabilistically weak, because the overall relation ( A → C ) would then be very weak. But the transitive and intransitive chains were equated for intermediate link strength, so this explanation cannot be correct.

The lack of transitive inferences given unschematized causal chains is a natural consequence of the schema theory, but is difficult to square with the network theory. When assessing whether an event causes another, people often use a “narrative” strategy, rejecting a causal relationship between two events if they cannot generate a story leading from the cause to the effect using their background knowledge (e.g., Kahneman & Tversky, 1982 ; Taleb, 2007 ). Hence, if people store A → B and B → C in separate schemas, they could not easily generate a path leading from A to C , resulting in intransitive judgments. The very point of the network representation, however, is to allow people to make precisely such judgments—to represent, for example, the conditional independence between A and C given B , and the effects of potential interventions on A on downstream variables. Indeed, if the network view defines mechanisms in terms of such conditional independence relations, then it would require these variables to be linked together. Participants’ intransitive judgments, then, are incompatible with network representations.

Open Questions

Because the issue of how causal knowledge is represented is a young research topic, we think it is fertile ground for further theoretical and empirical work. The greatest challenge appears to be understanding how mechanism knowledge can have all the representational properties that it does—it has schema-like properties (e.g., causally adjacent variables are not necessarily connected in a causal network; Johnson & Ahn, 2015 ), yet it also has association-like properties (e.g., causal reasoning sometimes violates probability theory in favor of associationist principles; Rehder, 2014 ), force-like properties (e.g., vector models capture aspects of causal reasoning; Wolff, 2007 ), icon-like properties (e.g., people have the phenomenology of visual simulation in solving mechanistic reasoning problems; Hegarty, 2004 ), placeholder-like properties (e.g., our meta-representations are far richer than our representations of mechanisms; Rozenblit & Keil, 2002 ), and network-like properties (e.g., people are sometimes able to perform sophisticated probabilistic reasoning in accord with Bayesian networks; Gopnik et al., 2004 ).

One view is that Bayesian network theories will ultimately be able to encompass many of these representational properties ( Danks, 2005 ). Although one version of the network theory equates mechanism knowledge with representing the causal graph ( Glymour & Cheng, 1998 ), other network-based theories might be more flexible (e.g., Griffiths & Tenenbaum, 2009 ). For example, Pearl (2000 , pp. xv–xvi) writes:

In this tradition [of Pearl’s earlier book Probabilistic Reasoning in Intelligent Systems ( 1988 )], probabilistic relationships constitute the foundations of human knowledge, whereas causality simply provides useful ways of abbreviating and organizing intricate patterns of probabilistic relationships. Today, my view is quite different. I now take causal relationships to be the fundamental building blocks both of physical reality and of human understanding of that reality, and I regard probabilistic relationships as but the surface phenomena of the causal machinery that underlies and propels our understanding of the world.

That is, our causal knowledge might be represented on two levels—at the level of causal graphs that represent probabilities and counterfactual entailments, and at a lower level that represents the operation of physical causal mechanisms. This view does not seem to capture all of the empirical evidence, as the results of Johnson and Ahn (2015) appear to challenge any theory that posits representations of causal networks without significant qualifications. Nonetheless, theories that combine multiple representational formats and explain the relations among them are needed to account for the diverse properties of mechanism knowledge.

Another largely open question is where the content of these representations comes from. For example, to the extent that mechanism knowledge is stored in a schema format, where do those schemas come from? That is, which event categories become clustered together in memory, and which do not? Little is known about this, perhaps because schema formation is multiply determined, likely depending on factors such as spatial and temporal contiguity, frequency of encounter, and others. This problem is similar in spirit and difficulty to the problem of why we have the particular concepts that we do. Why do we have the concept of “emerald” but not the concept of “emeruby” (an emerald before 1997 or a ruby after 1997; Goodman, 1955 )? Likewise, why do we have a schema for pregnancy and a schema for nausea, but not a schema that combines the two? Although we describe prior research below on how people learn causal mechanisms, this existing work does not resolve the issue of where causal schemas come from.

Learning Causal Mechanisms

In this section, we address how mechanism knowledge is learned. Associationist and network theories have usually emphasized learning from statistical induction (e.g., Glymour & Cheng, 1998 ). However, these theories can also accommodate the possibility that much or even most causal knowledge comes only indirectly from statistical induction. For example, some mechanisms could have been induced by our ancestors and passed to us by cultural evolution (and transmitted by testimony and education) or biological evolution (and transmitted by the selective advantage of our more causally enlightened ancestors). Although the bulk of empirical work on the acquisition of mechanisms has focused on statistical induction, we also summarize what is known about three potential indirect learning mechanisms—testimony, reasoning, and perception.

Direct Statistical Induction

If mechanisms are essentially patterns of covariation, as some theorists argue ( Glymour & Cheng, 1998 ; Pearl, 1988 ), then the most direct way to learn about mechanisms is by inducing these patterns through statistical evidence. In fact, people are often able to estimate the probability of a causal relationship between two variables from contingency data (e.g., Griffiths & Tenenbaum, 2005 ; see also Rottman, Chapter 6 in this volume). However, mechanisms involve more than two variables, and the ability to learn causal relationships from contingency data largely vanishes when additional variables are introduced. For instance, in Steyvers, Wagenmakers, Blum, and Tenenbaum (2003) , participants were trained to distinguish between three-variable common-cause structures (i.e., A causes both B and C ) and common-effect structures (i.e., A and B both cause C ). Although performance was better than chance levels (50% accuracy), it was nonetheless quite poor—less than 70% accuracy on average even after 160 trials, with nearly half of participants performing no better than chance. (For similar results, see Hashem & Cooper, 1996 , and White, 2006 .) Although people are better able to learn from intervention than from mere observation ( Kushnir & Gopnik, 2005 ; Lagnado & Sloman, 2004 ; Waldmann & Hagmayer, 2005 ; see also Bramley, Lagnado, & Speekenbrink, 2015 ; Coenen, Rehder, & Gureckis, 2015 ), they are still quite poor at learning multivariable causal structures. In Steyvers et al. (2003) , learners allowed to intervene achieved only 33% accuracy at distinguishing among the 18 possible configurations of three variables (compared to 5.6% chance performance and 100% optimal performance). For the complex causal patterns at play in the real world, it seems unlikely that people rely on observational or interventional learning of multivariable networks as their primary strategy for acquiring mechanism knowledge.

Given that people have great difficulty learning a network of only three variables when presented simultaneously, a second potential learning strategy is piecemeal learning of causal networks. That is, instead of learning relations among multiple variables at once, people may first acquire causal relationships between two variables, and then combine them into larger networks ( Ahn & Dennis, 2000 ; Fernbach & Sloman, 2009 ). For example, Baetu and Baker (2009) found that people who learned a contingency between A and B and between B and C inferred an appropriate contingency between A and C , suggesting that participants had used the principle of causal transitivity to combine inferences about these disparate links (for similar findings, see Goldvarg & Johnson-Laird, 2001 ; von Sydow, Meder, & Hagmayer, 2009 ). 1 Although more work will be necessary to test the boundary conditions on piecemeal construction of causal networks (e.g., Johnson & Ahn, 2015 ), this appears to be a more promising strategy for acquiring knowledge of complex causal mechanisms.
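The arithmetic of such piecemeal combination can be sketched as follows (contingency values invented; this illustrates the transitive composition itself, not the procedure of any particular study): two links learned separately are chained through the shared event B, on the assumption that A influences C only via B.

```python
# Hypothetical contingency tables for two separately learned links.
p_b_given_a = {1: 0.75, 0: 0.25}   # P(B | A) and P(B | ~A)
p_c_given_b = {1: 0.80, 0: 0.20}   # P(C | B) and P(C | ~B)

def p_c_given_a(a):
    # Chain the two links through the mediator B:
    # P(C | A) = P(C | B) * P(B | A) + P(C | ~B) * P(~B | A)
    pb = p_b_given_a[a]
    return p_c_given_b[1] * pb + p_c_given_b[0] * (1 - pb)

# The never-directly-observed A -> C contingency (delta-P) falls out:
delta_p = p_c_given_a(1) - p_c_given_a(0)
assert abs(delta_p - 0.30) < 1e-9   # positive, as transitivity predicts
```

Because both intermediate links are positive, the derived A → C contingency is positive as well, mirroring the inferences participants drew from separately learned links.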

Learning networks of causal relations from contingency data is challenging, whether from observations or from interventions, likely as a result of our computational limits. Hence, it seems unlikely that we induce all of our mechanism knowledge from statistical learning (see Ahn & Kalish, 2000 ), even if direct statistical induction plays some role. Where might these other beliefs about causal mechanisms come from?

Indirect Sources of Mechanism Knowledge

Much of our mechanism knowledge appears to come not directly from induction over observations, but from other sources, such as testimony from other people or explicit education, reasoning from other beliefs, and perhaps perception. Although relatively little work has addressed the roles of these sources in acquiring mechanism knowledge in particular, each has been implicated in causal learning more generally.

Testimony and Cultural Evolution

Much of our mechanism knowledge seems to come from family members and peers, from experts, and from formal and informal education. Children are famously curious, renowned for asking strings of “why” questions that probe for underlying mechanisms. Although parents are an important resource in children’s learning (e.g., Callanan & Oakes, 1992 ), parents’ knowledge is necessarily limited by their expertise. However, children’s (and adults’) ability to seek out and learn from experts puts them in a position to acquire mechanism knowledge when it is unavailable from more immediate informants ( Mills, 2013 ; Sobel & Kushnir, 2013 ; Sperber et al., 2010 ). In particular, children have an understanding of how knowledge is distributed across experts ( Lutz & Keil, 2002 ) and which causal systems are sufficiently rich or “causally dense” that they would have experts ( Keil, 2010 ).

Further, the growth of mechanism knowledge not only over ontogeny but over history points to powerful mechanisms of cultural evolution (Boyd & Richerson, 1985; Dawkins, 1976 ). Successive generations generate new scientific knowledge and transmit a subset of that knowledge to the public and to other scientists. Most experimental and computational work in cultural evolution has focused on how messages are shaped over subsequent generations ( Bartlett, 1932 ; Griffiths, Kalish, & Lewandowsky, 2008 ), how languages evolve ( Nowak, Komarova, & Niyogi, 2001 ), or how beliefs and rituals are propagated ( Boyer, 2001 ). Less is known from a formal or experimental perspective about how cultural evolution impacts the adoption of scientific ideas (but see Kuhn, 1962 ). Nonetheless, it is clear that the succession of ideas over human history is guided in large part by a combination of scientific scrutiny and cultural selection, and that these forces therefore contribute to the mechanism knowledge that individual cognizers bring to bear on the world.

Reasoning

Imagine you have done the hard work of understanding the mechanisms underlying the circulatory system of elephants—perhaps by conducting observations and experiments, or through explicit education. It would be sad indeed if this hard-won mechanism knowledge were restricted to causal reasoning about elephants. What about specific kinds of elephants? Mammals in general? Particular mammals like zebras?

Beliefs are not informational islands. Rather, we can use reasoning to extend knowledge from one domain to another. We can use deductive reasoning to extend our general knowledge about elephant circulation “forward” to African elephant circulation ( Johnson-Laird & Byrne, 1991 ; Rips, 1994 ; Stenning & van Lambalgen, 2008 ; see Oaksford & Chater, Chapter 19 in this volume, and Over, Chapter 18 in this volume). We can use analogical reasoning to extend our knowledge of elephant circulation “sideways” to similar organisms like zebras ( Gentner & Markman, 1997 ; Hofstadter, 2014 ; Holyoak & Thagard, 1997 ; see Holyoak & Lee, Chapter 24 in this volume). And we can use abductive reasoning to extend our knowledge “backward” to mammals ( Keil, 2006 ; Lipton, 2004 ; Lombrozo, 2012 ; see Lombrozo & Vasilyeva, Chapter 22 in this volume, and Meder & Mayrhofer, Chapter 23 in this volume); indeed, Ahn and Kalish (2000) suggested that abductive reasoning is a particularly important process underlying mechanistic causal reasoning. Although these reasoning strategies do not always lead to veridical beliefs (e.g., Lipton, 2004 ; Stenning & van Lambalgen, 2008 ), they seem to do well often enough that they can be productive sources of hypotheses about causal mechanisms, and they may be accurate enough to support causal inference in many realistic circumstances without exceeding our cognitive limits.

Perception

Intuitively, we sometimes seem to learn mechanisms from simply watching those mechanisms operate in the world (see White, Chapter 14 in this volume). For example, you might observe a bicycle in operation, and draw conclusions about the underlying mechanisms from these direct observations. Indeed, much evidence supports the possibility that people can visually perceive individual causal relations ( Michotte, 1946/1963 ; Rolfs, Dambacher, & Cavanagh, 2013 ; see White, 2009a , for a review, and Rips, 2011 , for a contrary view). Haptic experiences may also play a role in identifying causal relations (White, 2012 , 2014 ; Wolff & Shepard, 2013 ). Just as people seem to learn about individual causal relationships from statistical information and combine them together into more detailed mechanism representations ( Ahn & Dennis, 2000 ; Fernbach & Sloman, 2009 ), people may likewise be able to learn about individual causal events from visual experience, and combine these into larger mechanism representations.

However, we should be cautious in assuming that we rely strongly on perceptual learning for acquiring mechanism knowledge, because little work has addressed this question directly, and people are susceptible to metacognitive illusions ( Rozenblit & Keil, 2002 ). For example, Lawson (2006) found that people have poor understanding of how bicycles work, and when asked to depict a bicycle from memory, often draw structures that would be impossible to operate (e.g., because the frame would prevent the wheels from turning). These errors were found even for bicycle experts and people with a physical bicycle in front of them while completing the task (see also Rozenblit & Keil, 2002 ). Hence, in many cases, what appears to be a mechanism understood through direct perceptual means is in fact something far more schematic and incomplete, derived from long-term memory.

One major open question concerns the balance among these direct and indirect sources. Do we acquire many of our mechanism beliefs through statistical induction, despite our difficulty with learning networks of variables, or is the majority of our causal knowledge derived from other indirect sources? When we combine individual causal relations into mechanism representations, do we do so only with relations learned statistically, or are we also able to combine disparate relations learned through testimony, reasoning, or perception? To what extent can these causal maps combine relations learned through different strategies? Put differently, do these learning strategies all produce mechanism representations of the same format, or do they contribute different sorts of representations that may be difficult to combine into a larger picture?

Another challenge for future research will be investigating the extent to which these sources contribute not only to learning general causal knowledge (learning that A causes B) but also to learning mechanism knowledge (learning why A causes B). The majority of the evidence summarized earlier concerns only general causal knowledge, so the contribution of these indirect sources to acquiring mechanism knowledge should be addressed empirically.

Finally, might some mechanism knowledge be conveyed through the generations not only through cultural evolution, but also through biological evolution? It is controversial to what extent we have innate knowledge (e.g., Carey, 2009 ; Elman et al., 1996 ), and less clear still to what extent we have innate knowledge of causal mechanisms. Nonetheless, we may be born with some highly schematic, skeletal representations of mechanisms. For example, 4-month-old infants appear to understand the fundamental explanatory principles of physics (e.g., Spelke, Breinlinger, Macomber, & Jacobson, 1992 ), including physical causality ( Leslie & Keeble, 1987 ); belief-desire psychology emerges in a schematic form by 12 months ( Gergely & Csibra, 2003 ); and young children use the principles of essentialism ( Keil, 1989 ), vitalism ( Inagaki & Hatano, 2004 ), and inherence ( Cimpian & Salomon, 2014 ) to understand the behavior of living things. These rudimentary explanatory patterns may provide candidate mechanisms underlying many more specific causal relationships observed in the world. To the extent that these patterns are innate, we might be born with some highly skeletal understanding of causal mechanisms that can underlie later learning.

Conclusions

The chapters in this volume demonstrate the depth to which causality pervades our thinking. In this chapter, we have argued further that knowledge of causal mechanisms pervades our causal understanding. First, when deciding whether a relationship is causal, mechanism knowledge can override other cues to causality. It provides evidence over and above covariation, and a mechanism can even change the interpretation of new covariation information; it can result in violations of the causal Markov condition—a critical assumption for statistical reasoning via Bayesian networks; and it can alter expectations about temporal delays, moderating the effect of temporal proximity on causal judgment. Second, mechanism knowledge is crucial to inductive inference. It affects which categories are used and induced; how strongly an exemplar’s features are projected onto other exemplars; how likely we are to extend a property from one category to another; and how we make category-based probability judgments, producing discounting and conjunction effects.
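The Markov-condition point can be made concrete. In a normative Bayesian network for a chain A → B → C, the direct cause B "screens off" the distal cause A: conditional on B, knowing A should not change beliefs about C. A minimal simulation (the structure and all probabilities below are illustrative assumptions, not values from the chapter) shows the screening-off pattern that mechanism beliefs can lead people to violate:

```python
import random

random.seed(0)

# Simulate a simple causal chain A -> B -> C. The structure and the
# probabilities are assumptions chosen for illustration.
def sample():
    a = random.random() < 0.5
    b = random.random() < (0.9 if a else 0.1)   # B depends on A
    c = random.random() < (0.9 if b else 0.1)   # C depends only on B
    return a, b, c

data = [sample() for _ in range(200_000)]

def p_c_given(cond):
    rows = [c for a, b, c in data if cond(a, b)]
    return sum(rows) / len(rows)

# Causal Markov condition: conditional on its direct cause B, C is
# independent of the more distal cause A ("screening off").
p_c_b_a    = p_c_given(lambda a, b: b and a)        # P(C | B, A)
p_c_b_nota = p_c_given(lambda a, b: b and not a)    # P(C | B, not A)
print(round(p_c_b_a, 2), round(p_c_b_nota, 2))     # both close to 0.9
```

Human reasoners who posit a shared mechanism may treat these two conditional probabilities as different, which is the kind of Markov violation documented by Park and Sloman (2013).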

Mechanism knowledge is also key to how causal relations are mentally represented. Several representational formats have been proposed—associations, forces or powers, icons, placeholders, networks, and schemas. Although there are likely to be elements of all of these formats in our mechanism knowledge, two positive empirical conclusions are clear. First, people’s meta-representations of causal knowledge are far richer than their actual causal knowledge, suggesting that our representations include abstract placeholders or “pointers” to real-world referents that are not stored in the head. Second, however, people do represent some mechanism content, and this content appears to often take the form of causal schemas. Future theoretical and empirical work should address how the various properties of mechanism knowledge can be understood in a single framework.

Mechanisms may be acquired in part through statistical induction. However, because people are poor at learning networks of three or more variables by induction, it is more likely that people learn causal relations individually and assemble them piecemeal into larger networks. People also seem to use other learning strategies for acquiring mechanism knowledge, such as testimony, reasoning, and perhaps perception. How these strategies interact, and whether they produce different sorts of representations, are open questions.
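The piecemeal-assembly idea can be sketched as merging individually learned cause-effect links, whatever their source, into a single causal map over which further inference runs. The links and variable names below are invented for illustration:

```python
from collections import defaultdict

def assemble(pairs):
    """Merge individually learned (cause, effect) links into one graph."""
    graph = defaultdict(set)
    for cause, effect in pairs:
        graph[cause].add(effect)
    return graph

def downstream(graph, node, seen=None):
    """All effects reachable from `node` (the transitive closure)."""
    seen = seen or set()
    for nxt in graph.get(node, ()):
        if nxt not in seen:
            seen.add(nxt)
            downstream(graph, nxt, seen)
    return seen

# Links learned separately, perhaps via different strategies:
learned = [("smoking", "tar buildup"),          # e.g., from testimony
           ("tar buildup", "lung damage"),      # e.g., from covariation
           ("lung damage", "shortness of breath")]
causal_map = assemble(learned)
print(sorted(downstream(causal_map, "smoking")))
# ['lung damage', 'shortness of breath', 'tar buildup']
```

Whether links learned through different strategies really share one representational format like this, or resist combination, is exactly the open question raised above.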

Although we would not claim that all reasoning about causation is reasoning about mechanisms, mechanisms are central to many of our nearest and dearest inferential processes. Hence, understanding the representation and acquisition of mechanism knowledge can help to cut to the core of causal thinking, and much of the cognition that it makes possible.

Although this result may appear to conflict with the results of Johnson and Ahn (2015) , which demonstrated causal intransitivity in some causal chains, the two sets of findings can be reconciled, because Johnson and Ahn (2015) used familiar stimuli for which people could expect to have schematized knowledge, whereas Baetu and Baker (2009) used novel stimuli. In reasoning about novel stimuli, people would not use a narrative strategy (i.e., trying to think of a story connecting the causal events), but would instead use a statistical ( Baetu & Baker, 2009 ) or rule-based strategy ( Goldvarg & Johnson-Laird, 2001 ). The lack of schematized knowledge would not block transitive inferences under these reasoning strategies.

Ahn, W. , & Bailenson, J. ( 1996 ). Causal attribution as a search for underlying mechanisms: An explanation of the conjunction fallacy and the discounting principle.   Cognitive Psychology , 31 , 82–123.

Ahn, W. , & Dennis, M. J. ( 2000 ). Induction of causal chains. In L. R. Gleitman & A. K. Joshi (Eds.), Proceedings of the 22nd annual conference of the Cognitive Science Society (pp. 19–24). Mahwah, NJ: Lawrence Erlbaum Associates.

Ahn, W. , & Kalish, C. W. ( 2000 ). The role of mechanism beliefs in causal reasoning. In F. C. Keil & R. A. Wilson (Eds.), Explanation and cognition (pp. 199–226). Cambridge, MA: MIT Press.

Ahn, W. , Kalish, C. W. , Medin, D. L. , & Gelman, S. A. ( 1995 ). The role of covariation versus mechanism information in causal attribution.   Cognition , 54 , 299–352.

Aristotle . ( 1970 ). Physics, Books I–II . ( W. Charlton , Trans.) Oxford: Clarendon Press.

Baetu, I. , & Baker, A. G. ( 2009 ). Human judgments of positive and negative causal chains.   Journal of Experimental Psychology: Animal Behavior Processes , 35 , 153–168.

Barbey, A. K. , & Wolff, P. ( 2007 ). Learning causal structure from reasoning. In D. S. McNamara & J. G. Trafton (Eds.), Proceedings of the 29th annual conference of the Cognitive Science Society (pp. 713–718). Austin, TX: Cognitive Science Society.

Barsalou, L. W. ( 1999 ). Perceptual symbol systems.   Behavioral and Brain Sciences , 22 , 577–660.

Bartlett, F. C. ( 1932 ). Remembering: An experimental and social study . Cambridge, UK: Cambridge University Press.

Baumrind, D. ( 1983 ). Specious causal attributions in the social sciences: The reformulated stepping-stone theory of heroin use as exemplar.   Journal of Personality and Social Psychology , 45 , 1289–1298.

Boyd, R. , & Richerson, P. J. ( 2005 ). The origin and evolution of cultures . Oxford: Oxford University Press.

Boyer, P. ( 2001 ). Religion explained: The evolutionary origins of religious thought . New York: Basic Books.

Bramley, N. R. , Lagnado, D. A. , & Speekenbrink, M. ( 2015 ). Conservative forgetful scholars: How people learn causal structure through sequences of interventions.   Journal of Experimental Psychology: Learning, Memory, and Cognition , 41 , 708–731.

Buehner, M. J. , & May, J. ( 2002 ). Knowledge mediates the timeframe of covariation assessment in human causal induction.   Thinking & Reasoning , 8 , 269–293.

Buehner, M. J. , & May, J. ( 2003 ). Rethinking temporal contiguity and the judgement of causality: Effects of prior knowledge, experience, and reinforcement procedure.   The Quarterly Journal of Experimental Psychology , 56A , 865–890.

Buehner, M. J. , & May, J. ( 2004 ). Abolishing the effect of reinforcement delay on human causal learning.   The Quarterly Journal of Experimental Psychology , 57B , 179–191.

Buehner, M. J. , & McGregor, S. ( 2006 ). Temporal delays can facilitate causal attribution: Towards a general timeframe bias in causal induction.   Thinking & Reasoning , 12 , 353–378.

Bullock, M. , Gelman, R. , & Baillargeon, R. ( 1982 ). The development of causal reasoning. In W. J. Friedman (Ed.), The developmental psychology of time (pp. 209–254). New York: Academic Press.

Callanan, M. A. , & Oakes, L. M. ( 1992 ). Preschoolers’ questions and parents’ explanations: Causal thinking in everyday activity.   Cognitive Development , 7 , 213–233.

Carey, S. ( 2009 ). The origin of concepts . Oxford: Oxford University Press.

Cimpian, A. , & Salomon, E. ( 2014 ). The inherence heuristic: An intuitive means of making sense of the world, and a potential precursor to psychological essentialism.   Behavioral and Brain Sciences , 37 , 461–527.

Coenen, A. , Rehder, B. , & Gureckis, T. M. ( 2015 ). Strategies to intervene on causal systems are adaptively selected.   Cognitive Psychology , 79 , 102–133.

Cushman, F. ( 2008 ). Crime and punishment: Distinguishing the roles of causal and intentional analyses in moral judgment.   Cognition , 108 , 353–380.

Danks, D. ( 2005 ). The supposed competition between theories of human causal inference.   Philosophical Psychology , 18 , 259–272.

Dawkins, R. ( 1976 ). The selfish gene . Oxford: Oxford University Press.

Dowe, P. ( 2000 ). Physical causation . Cambridge, UK: Cambridge University Press.

Einhorn, H. J. , & Hogarth, R. M. ( 1986 ). Judging probable cause.   Psychological Bulletin , 99 , 3–19.

Elman, J. L. , Bates, E. A. , Johnson, M. H. , Karmiloff-Smith, A. , Parisi, D. , & Plunkett, K. ( 1996 ). Rethinking innateness: A connectionist perspective on development . Cambridge, MA: MIT Press.

Fernbach, P. M. , Darlow, A. , & Sloman, S. A. ( 2010 ). Neglect of alternative causes in predictive but not diagnostic reasoning.   Psychological Science , 21 , 329–336.

Fernbach, P. M. , Darlow, A. , & Sloman, S. A. ( 2011 ). Asymmetries in predictive and diagnostic reasoning.   Journal of Experimental Psychology: General , 140 , 168–185.

Fernbach, P. M. , & Sloman, S. A. ( 2009 ). Causal learning with local computations.   Journal of Experimental Psychology: Learning, Memory, and Cognition , 35 , 678–693.

Fischhoff, B. , Slovic, P. , & Lichtenstein, S. ( 1977 ). Knowing with certainty: The appropriateness of extreme confidence.   Journal of Experimental Psychology: Human Perception and Performance , 3 , 552–564.

Focht, D. R. , III, Spicer, C. , & Fairchok, M. P. ( 2002 ). The efficacy of duct tape vs cryotherapy in the treatment of verruca vulgaris (the common wart).   Archives of Pediatrics & Adolescent Medicine , 156 , 971–974.

Forbus, K. D. ( 1984 ). Qualitative process theory.   Artificial Intelligence , 24 , 85–168.

Fugelsang, J. A. , & Thompson, V. A. ( 2000 ). Strategy selection in causal reasoning: When beliefs and covariation collide.   Canadian Journal of Experimental Psychology , 54 , 15–32.

Fugelsang, J. A. , Stein, C. B. , Green, A. E. , & Dunbar, K. N. ( 2004 ). Theory and data interactions of the scientific mind: Evidence from the molecular and the cognitive laboratory.   Canadian Journal of Experimental Psychology , 58 , 86–95.

Gentner, D. , & Markman, A. B. ( 1997 ). Structure mapping in analogy and similarity.   American Psychologist , 52 , 45–56.

Gergely, G. , & Csibra, G. ( 2003 ). Teleological reasoning in infancy: The naive theory of rational action.   Trends in Cognitive Sciences , 7 , 287–292.

Glennan, S. S. ( 1996 ). Mechanisms and the nature of causation.   Erkenntnis , 44 , 49–71.

Glymour, C. , & Cheng, P. W. ( 1998 ). Causal mechanism and probability: A normative approach. In M. Oaksford & N. Chater (Eds.), Rational models of cognition (pp. 295–313). Oxford: Oxford University Press.

Goldvarg, E. , & Johnson-Laird, P. N. ( 2001 ). Naive causality: A mental model theory of causal meaning and reasoning.   Cognitive Science , 25 , 565–610.

Goodman, N. ( 1955 ). Fact, fiction, and forecast . Cambridge, MA: Harvard University Press.

Gopnik, A. , Glymour, C. , Sobel, D. M. , Schulz, L. E. , Kushnir, T. , & Danks, D. ( 2004 ). A theory of causal learning in children: Causal maps and Bayes nets.   Psychological Review , 111 , 3–32.

Griffiths, T. L. , Kalish, M. L. , & Lewandowsky, S. ( 2008 ). Theoretical and empirical evidence for the impact of inductive biases on cultural evolution.   Philosophical Transactions of the Royal Society B , 363 , 3503–3514.

Griffiths, T. L. , & Tenenbaum, J. B. ( 2005 ). Structure and strength in causal induction.   Cognitive Psychology , 51 , 334–384.

Griffiths, T. L. , & Tenenbaum, J. B. ( 2009 ). Theory-based causal induction.   Psychological Review , 116 , 661–716.

Hagmayer, Y. , Meder, B. , von Sydow, M. , & Waldmann, M. R. ( 2011 ). Category transfer in sequential causal learning: The unbroken mechanism hypothesis.   Cognitive Science , 35 , 842–873.

Harré, R. , & Madden, E. H. ( 1975 ). Causal powers: A theory of natural necessity . Lanham, MD: Rowman & Littlefield.

Hashem, A. I. , & Cooper, G. F. (1996). Human causal discovery from observational data. Proceedings of the AMIA Annual Fall Symposium , 27–31. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2233172/ .

Hegarty, M. ( 1992 ). Mental animation: Inferring motion from static displays of mechanical systems.   Journal of Experimental Psychology: Learning, Memory, and Cognition , 18 , 1084–1102.

Hegarty, M. ( 2004 ). Mechanical reasoning by mental simulation.   Trends in Cognitive Sciences , 8 , 280–285.

Hegarty, M. , & Sims, V. K. ( 1994 ). Individual differences in mental animation during mechanical reasoning.   Memory & Cognition , 22 , 411–430.

Heit, E. ( 2000 ). Properties of inductive reasoning.   Psychonomic Bulletin & Review , 7 , 569–592.

Hitchcock, C. ( 2001 ). The intransitivity of causation revealed in equations and graphs.   The Journal of Philosophy , 98 , 273–299.

Hofstadter, D. R. ( 2014 ). Surfaces and essences: Analogy as the fuel and fire of thought . New York: Basic Books.

Holyoak, K. J. , & Thagard, P. ( 1997 ). The analogical mind.   American Psychologist , 52 , 35–44.

Hume, D. ( 1748 /1977). An enquiry concerning human understanding . Indianapolis, IN: Hackett.

Inagaki, K. , & Hatano, G. ( 2004 ). Vitalistic causality in young children’s naive biology.   Trends in Cognitive Sciences , 8 , 356–362.

Johnson, S. G. B. , & Ahn, W. ( 2015 ). Causal networks or causal islands? The representation of mechanisms and the transitivity of causal judgment.   Cognitive Science , 39 , 1468–1503.

Johnson, S. G. B. , & Keil, F. C. ( 2014 ). Causal inference and the hierarchical structure of experience.   Journal of Experimental Psychology: General , 143 , 2223–2241.

Johnson-Laird, P. N. , & Byrne, R. M. J. ( 1991 ). Deduction: Essays in cognitive psychology . Hillsdale, NJ: Lawrence Erlbaum Associates.

Kahneman, D. , & Tversky, A. ( 1982 ). The simulation heuristic. In D. Kahneman , P. Slovic , & A. Tversky (Eds.), Judgment under uncertainty: Heuristics and biases (pp. 201–208). Cambridge, UK: Cambridge University Press.

Keil, F. C. ( 1989 ). Concepts, kinds, and cognitive development . Cambridge, MA: MIT Press.

Keil, F. C. ( 2006 ). Explanation and understanding.   Annual Review of Psychology , 57 , 227–254.

Keil, F. C. ( 2010 ). The feasibility of folk science.   Cognitive Science , 34 , 826–862.

Kelley, H. H. ( 1973 ). The processes of causal attribution.   American Psychologist , 28 , 107–128.

Kemp, C. , & Jern, A. ( 2013 ). A taxonomy of inductive problems.   Psychonomic Bulletin & Review , 21 , 23–46.

Koslowski, B. ( 1996 ). Theory and evidence: The development of scientific reasoning . Cambridge, MA: MIT Press.

Kripke, S. ( 1980 ). Naming and necessity . Oxford: Blackwell.

Kuhn, T. S. ( 1962 ). The structure of scientific revolutions . Chicago: University of Chicago Press.

Kushnir, T. , & Gopnik, A. ( 2005 ). Young children infer causal strength from probabilities and interventions.   Psychological Science , 16 , 678–683.

Lagnado, D. A. , & Sloman, S. A. ( 2004 ). The advantage of timely intervention.   Journal of Experimental Psychology: Learning, Memory, and Cognition , 30 , 856–876.

Lagnado, D. A. , & Sloman, S. A. ( 2006 ). Time as a guide to cause.   Journal of Experimental Psychology: Learning, Memory, and Cognition , 32 , 451–460.

Lawson, R. ( 2006 ). The science of cycology: Failures to understand how everyday objects work.   Memory & Cognition , 34 , 1667–1675.

Leslie, A. M. , & Keeble, S. ( 1987 ). Do six-month-old infants perceive causality?   Cognition , 25 , 265–288.

Lipton, P. ( 2004 ). Inference to the best explanation (2nd ed.). London: Routledge.

Lombrozo, T. ( 2010 ). Causal-explanatory pluralism: How intentions, functions, and mechanisms influence causal ascriptions.   Cognitive Psychology , 61 , 303–332.

Lombrozo, T. ( 2012 ). Explanation and abductive inference. In K. J. Holyoak & R. G. Morrison (Eds.), Oxford handbook of thinking and reasoning (pp. 260–276). Oxford: Oxford University Press.

Lombrozo, T. , & Carey, S. ( 2006 ). Functional explanation and the function of explanation.   Cognition , 99 , 167–204.

Lutz, D. J. , & Keil, F. C. ( 2002 ). Early understanding of the division of cognitive labor.   Child Development , 73 , 1073–1084.

Machamer, P. , Darden, L. , & Craver, C. F. ( 2000 ). Thinking about mechanisms.   Philosophy of Science , 67 , 1–25.

Markman, A. B. ( 1999 ). Knowledge representation . Mahwah, NJ: Lawrence Erlbaum Associates.

Medin, D. L. , Coley, J. D. , Storms, G. , & Hayes, B. K. ( 2003 ). A relevance theory of induction.   Psychonomic Bulletin & Review , 10 , 517–532.

Medin, D. L. , & Ortony, A. ( 1989 ). Psychological essentialism. In S. Vosniadou & A. Ortony (Eds.), Similarity and analogical reasoning . Cambridge, UK: Cambridge University Press.

Michotte, A. ( 1946 /1963). The perception of causality . ( T. R. Miles & E. Miles , Trans.). New York: Basic Books.

Mills, C. M. ( 2013 ). Knowing when to doubt: Developing a critical stance when learning from others.   Developmental Psychology , 49 , 404–418.

Murphy, G. L. ( 2002 ). The big book of concepts . Cambridge, MA: MIT Press.

Murphy, G. L. , & Allopenna, P. D. ( 1994 ). The locus of knowledge effects in concept learning.   Journal of Experimental Psychology: Learning, Memory, and Cognition , 20 , 904–919.

Murphy, G. L. , & Medin, D. L. ( 1985 ). The role of theories in conceptual coherence.   Psychological Review , 92 , 289–316.

Murphy, G. L. , & Wisniewski, E. J. ( 1989 ). Feature correlations in conceptual representations. In Advances in cognitive science , Vol. 2: Theory and applications (pp. 23–45). Chichester, UK: Ellis Horwood.

Nowak, M. A. , Komarova, N. L. , & Niyogi, P. ( 2001 ). Evolution of universal grammar.   Science , 291 , 114–118.

Park, J. , & Sloman, S. A. ( 2013 ). Mechanistic beliefs determine adherence to the Markov property in causal reasoning.   Cognitive Psychology , 67 , 186–216.

Park, J. , & Sloman, S. A. ( 2014 ). Causal explanation in the face of contradiction.   Memory & Cognition , 42 , 806–820.

Patalano, A. L. , & Ross, B. H. ( 2007 ). The role of category coherence in experience-based prediction.   Psychonomic Bulletin & Review , 14 , 629–634.

Paul, L. A. , & Hall, N. ( 2013 ). Causation: A user’s guide . Oxford: Oxford University Press.

Pearl, J. ( 1988 ). Probabilistic reasoning in intelligent systems: Networks of plausible inference . San Francisco: Morgan Kaufmann.

Pearl, J. ( 2000 ). Causality: Models, reasoning, and inference . Cambridge, UK: Cambridge University Press.

Pennington, N. , & Hastie, R. ( 1988 ). Explanation-based decision making: Effects of memory structure on judgment.   Journal of Experimental Psychology: Learning, Memory, and Cognition , 14 , 521–533.

Putnam, H. ( 1975 ). The meaning of “meaning.” In K. Gunderson (Ed.), Language, mind, and knowledge (pp. 131–193). Minneapolis: University of Minnesota Press.

Pylyshyn, Z. W. ( 1973 ). What the mind’s eye tells the mind’s brain: A critique of mental imagery.   Psychological Bulletin , 80 , 1–24.

Rehder, B. ( 2014 ). Independence and dependence in human causal reasoning.   Cognitive Psychology , 72 , 54–107.

Rehder, B. , & Burnett, R. C. ( 2005 ). Feature inference and the causal structure of categories.   Cognitive Psychology , 50 , 264–314.

Rehder, B. , & Hastie, R. ( 2004 ). Category coherence and category-based property induction.   Cognition , 91 , 113–153.

Rehder, B. , & Ross, B. H. ( 2001 ). Abstract coherent categories.   Journal of Experimental Psychology: Learning, Memory, and Cognition , 27 , 1261–1275.

Rips, L. J. ( 1984 ). Mental muddles. In M. Brand & R. M. Harnish (Eds.), The representation of knowledge and belief . Tucson: University of Arizona Press.

Rips, L. J. ( 1994 ). The psychology of proof . Cambridge, MA: MIT Press.

Rips, L. J. ( 2011 ). Causation from perception.   Perspectives on Psychological Science , 6 , 77–97.

Rolfs, M. , Dambacher, M. , & Cavanagh, P. ( 2013 ). Visual adaptation of the perception of causality.   Current Biology , 23 , 250–254.

Rottman, B. M. , & Hastie, R. ( 2014 ). Reasoning about causal relationships: Inferences on causal networks.   Psychological Bulletin , 140 , 109–139.

Rottman, B. M. , & Keil, F. C. ( 2012 ). Causal structure learning over time: Observations and interventions.   Cognitive Psychology , 64 , 93–125.

Rozenblit, L. , & Keil, F. C. ( 2002 ). The misunderstood limits of folk science: An illusion of explanatory depth.   Cognitive Science , 26 , 521–562.

Salmon, W. C. ( 1984 ). Scientific explanation and the causal structure of the world . Princeton, NJ: Princeton University Press.

Schank, R. , & Abelson, R. ( 1977 ). Scripts, plans, goals, and understanding: An inquiry into human knowledge structures . New York: Psychology Press.

Schlottmann, A. ( 1999 ). Seeing it happen and knowing how it works: How children understand the relation between perceptual causality and underlying mechanism.   Developmental Psychology , 35 , 303–317.

Schwartz, D. L. , & Black, J. B. ( 1996 ). Shuttling between depictive models and abstract rules: Induction and fallback.   Cognitive Science , 20 , 457–497.

Shanks, D. R. ( 1987 ). Associative accounts of causality judgment.   Psychology of Learning and Motivation , 21 , 229–261.

Shanks, D. R. , Pearson, S. M. , & Dickinson, A. ( 1989 ). Temporal contiguity and the judgement of causality by human subjects.   The Quarterly Journal of Experimental Psychology , 41 , 139–159.

Shultz, T. R. , Fisher, G. W. , Pratt, C. C. , & Rulf, S. ( 1986 ). Selection of causal rules.   Child Development , 57 , 143–152.

Simon, H. A. ( 1996 ). The sciences of the artificial (3rd ed.). Cambridge, MA: MIT Press.

Sims, V. K. , & Hegarty, M. ( 1997 ). Mental animation in the visuospatial sketchpad: Evidence from dual-task studies.   Memory & Cognition , 25 , 321–332.

Sloman, S. A. ( 1994 ). When explanations compete: The role of explanatory coherence on judgements of likelihood.   Cognition , 52 , 1–21.

Sobel, D. M. , & Kushnir, T. ( 2013 ). Knowledge matters: How children evaluate the reliability of testimony as a process of rational inference.   Psychological Review , 120 , 779–797.

Spelke, E. S. , Breinlinger, K. , Macomber, J. , & Jacobson, K. ( 1992 ). Origins of knowledge.   Psychological Review , 99 , 605–632.

Spellman, B. A. , Price, C. M. , & Logan, J. M. ( 2001 ). How two causes are different from one: The use of (un)conditional information in Simpson’s paradox.   Memory & Cognition , 29 , 193–208.

Sperber, D. , Clément, F. , Heintz, C. , Mascaro, O. , Mercier, H. , Origgi, G. , & Wilson, D. ( 2010 ). Epistemic vigilance.   Mind & Language , 25 , 359–393.

Spirtes, P. , Glymour, C. , & Scheines, R. ( 1993 ). Causation, prediction, and search . New York: Springer.

Stenning, K. , & van Lambalgen, M. ( 2008 ). Human reasoning and cognitive science . Cambridge, MA: MIT Press.

Steyvers, M. , Tenenbaum, J. B. , Wagenmakers, E. , & Blum, B. ( 2003 ). Inferring causal networks from observations and interventions.   Cognitive Science , 27 , 453–489.

Taleb, N. N. ( 2007 ). The black swan: The impact of the highly improbable . New York: Random House.

Talmy, L. ( 1988 ). Force dynamics in language and cognition.   Cognitive Science , 12 , 49–100.

Tversky, A. , & Kahneman, D. ( 1981 ). Evidential impact of base rates . Technical Report. Palo Alto, CA: Office of Naval Research.

Tversky, A. , & Kahneman, D. ( 1983 ). Extensional versus intuitive reasoning: The conjunction fallacy in probability judgment.   Psychological Review , 90 , 293–315.

Von Sydow, M. , Meder, B. , & Hagmayer, Y. ( 2009 ). A transitivity heuristic of probabilistic causal reasoning. In N. A. Taatgen & H. van Rijn (Eds.), Proceedings of the 31st annual conference of the Cognitive Science Society (pp. 803–808). Austin, TX: Cognitive Science Society.

Waldmann, M. R. ( 2000 ). Competition among causes but not effects in predictive and diagnostic learning.   Journal of Experimental Psychology: Learning, Memory, and Cognition , 26 , 53–76.

Waldmann, M. R. , & Hagmayer, Y. ( 2005 ). Seeing versus doing: Two modes of accessing causal knowledge.   Journal of Experimental Psychology: Learning, Memory, and Cognition , 31 , 216–227.

Waldmann, M. R. , & Hagmayer, Y. ( 2006 ). Categories and causality: The neglected direction.   Cognitive Psychology , 53 , 27–58.

Waldmann, M. R. , & Holyoak, K. J. ( 1992 ). Predictive and diagnostic learning within causal models: Asymmetries in cue competition.   Journal of Experimental Psychology: General , 121 , 222–236.

Waldmann, M. R. , & Mayrhofer, R. ( 2016 ). Hybrid causal representations. In B. Ross (Ed.), The psychology of learning and motivation (Vol. 65, pp. 85–127). San Diego: Academic Press.

White, P. A. ( 1988 ). Causal processing: Origins and development.   Psychological Bulletin , 104 , 36–52.

White, P. A. ( 1989 ). A theory of causal processing.   British Journal of Psychology , 80 , 431–454.

White, P. A. ( 2006 ). How well is causal structure inferred from cooccurrence information?   European Journal of Cognitive Psychology , 18 , 454–480.

White, P. A. ( 2009 a). Perception of forces exerted by objects in collision events.   Psychological Review , 116 , 580–601.

White, P. A. ( 2009 b). Property transmission: An explanatory account of the role of similarity information in causal inference.   Psychological Bulletin , 135 , 774–793.

White, P. A. ( 2012 ). The experience of force: The role of haptic experience of forces in visual perception of object motion and interactions, mental simulation, and motion-related judgments.   Psychological Bulletin , 138 , 589–615.

White, P. A. ( 2014 ). Singular clues to causality and their use in human causal judgment.   Cognitive Science , 38 , 38–75.

Wolff, P. ( 2003 ). Direct causation in the linguistic coding and individuation of causal events.   Cognition , 88 , 1–48.

Wolff, P. ( 2007 ). Representing causation.   Journal of Experimental Psychology: General , 136 , 82–111.

Wolff, P. , Barbey, A. K. , & Hausknecht, M. ( 2010 ). For want of a nail: How absences cause events.   Journal of Experimental Psychology: General , 139 , 191–221.

Wolff, P. , & Shepard, J. ( 2013 ). Causation, touch, and the perception of force.   Psychology of Learning and Motivation , 58 , 167–202.

Causal Hypothesis

In scientific research, understanding causality is key to explaining how phenomena unfold. A causal hypothesis is a statement that predicts a cause-and-effect relationship between variables in a study. It guides study design, data collection, and the interpretation of results. This section provides clear examples of causal hypotheses across diverse fields, along with practical tips for formulating your own. Let’s delve into the essential components of constructing a compelling causal hypothesis.

What is Causal Hypothesis?

A causal hypothesis is a predictive statement that proposes a cause-and-effect relationship between two or more variables. It posits that a change in one variable (the independent, or cause, variable) will produce a change in another variable (the dependent, or effect, variable). The primary goal of a causal hypothesis is to determine whether one event or factor directly influences another. This type of hypothesis is commonly tested through experiments in which one variable is manipulated to observe its effect on another.

What is an example of a Causal Hypothesis Statement?

Example 1: If a person increases their physical activity (cause), then their overall health will improve (effect).

Explanation: Here, the independent variable is the “increase in physical activity,” while the dependent variable is the “improvement in overall health.” The hypothesis suggests that by manipulating the level of physical activity (e.g., by exercising more), there will be a direct effect on the individual’s health.

Other examples include the impact of a change in diet on weight loss, the influence of class size on student performance, and the effect of a new training method on employee productivity. The key element in all causal hypotheses is the proposed direct relationship between cause and effect.
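In practice, a causal hypothesis like Example 1 is tested by manipulating the cause under random assignment and measuring the effect. A toy simulation of such an experiment (all numbers, including the assumed +8 true effect and the score scale, are hypothetical) shows the logic:

```python
import random

random.seed(1)

# Hypothetical experiment: randomly assign participants, manipulate the
# independent variable (exercise program), and compare the dependent
# variable (a health score) across groups. All numbers are invented.
def health_score(exercises):
    base = random.gauss(70, 10)                 # baseline health score
    return base + (8 if exercises else 0)       # assumed true effect: +8

treatment = [health_score(True) for _ in range(1000)]
control   = [health_score(False) for _ in range(1000)]

mean = lambda xs: sum(xs) / len(xs)
effect = mean(treatment) - mean(control)
print(f"estimated effect: {effect:.1f}")        # close to the true +8
```

Because assignment to groups is random, the observed difference in means estimates the causal effect rather than a mere association.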

100 Causal Hypothesis Statement Examples

Causal hypotheses predict cause-and-effect relationships, aiming to understand the influence one variable has on another. Rooted in experimental setups, they’re essential for deriving actionable insights in many fields. Delve into these 100 illustrative examples to understand the essence of causal relationships.

  • Dietary Sugar & Weight Gain: Increased sugar intake leads to weight gain.
  • Exercise & Mental Health: Regular exercise improves mental well-being.
  • Sleep & Productivity: Lack of adequate sleep reduces work productivity.
  • Class Size & Learning: Smaller class sizes enhance student understanding.
  • Smoking & Lung Disease: Regular smoking causes lung diseases.
  • Pesticides & Bee Decline: Use of certain pesticides leads to bee population decline.
  • Stress & Hair Loss: Chronic stress accelerates hair loss.
  • Music & Plant Growth: Plants grow better when exposed to classical music.
  • UV Rays & Skin Aging: Excessive exposure to UV rays speeds up skin aging.
  • Reading & Vocabulary: Regular reading improves vocabulary breadth.
  • Video Games & Reflexes: Playing video games frequently enhances reflex actions.
  • Air Pollution & Respiratory Issues: High levels of air pollution increase respiratory diseases.
  • Green Spaces & Happiness: Living near green spaces improves overall happiness.
  • Yoga & Blood Pressure: Regular yoga practices lower blood pressure.
  • Meditation & Stress Reduction: Daily meditation reduces stress levels.
  • Social Media & Anxiety: Excessive social media use increases anxiety in teenagers.
  • Alcohol & Liver Damage: Regular heavy drinking leads to liver damage.
  • Training & Job Efficiency: Intensive training improves job performance.
  • Seat Belts & Accident Survival: Using seat belts increases chances of surviving car accidents.
  • Soft Drinks & Bone Density: High consumption of soft drinks decreases bone density.
  • Homework & Academic Performance: Regular homework completion improves academic scores.
  • Organic Food & Health Benefits: Consuming organic food improves overall health.
  • Fiber Intake & Digestion: Increased dietary fiber enhances digestion.
  • Therapy & Depression Recovery: Regular therapy sessions improve depression recovery rates.
  • Financial Education & Savings: Financial literacy education increases personal saving rates.
  • Brushing & Dental Health: Brushing teeth twice a day reduces dental issues.
  • Carbon Emission & Global Warming: Higher carbon emissions accelerate global warming.
  • Afforestation & Climate Stability: Planting trees stabilizes local climates.
  • Ad Exposure & Sales: Increased product advertisement boosts sales.
  • Parental Involvement & Academic Success: Higher parental involvement enhances student academic performance.
  • Hydration & Skin Health: Regular water intake improves skin elasticity and health.
  • Caffeine & Alertness: Consuming caffeine increases alertness levels.
  • Antibiotics & Bacterial Resistance: Overuse of antibiotics leads to increased antibiotic-resistant bacteria.
  • Pet Ownership & Loneliness: Having pets reduces feelings of loneliness.
  • Fish Oil & Cognitive Function: Regular consumption of fish oil improves cognitive functions.
  • Noise Pollution & Sleep Quality: High levels of noise pollution degrade sleep quality.
  • Exercise & Bone Density: Weight-bearing exercises increase bone density.
  • Vaccination & Disease Prevention: Proper vaccination reduces the incidence of related diseases.
  • Laughter & Immune System: Regular laughter boosts the immune system.
  • Gardening & Stress Reduction: Engaging in gardening activities reduces stress levels.
  • Travel & Cultural Awareness: Frequent travel increases cultural awareness and tolerance.
  • High Heels & Back Pain: Prolonged wearing of high heels leads to increased back pain.
  • Junk Food & Heart Disease: Excessive junk food consumption increases the risk of heart diseases.
  • Mindfulness & Anxiety Reduction: Practicing mindfulness lowers anxiety levels.
  • Online Learning & Flexibility: Online education offers greater flexibility to learners.
  • Urbanization & Wildlife Displacement: Rapid urbanization leads to displacement of local wildlife.
  • Vitamin C & Cold Recovery: High doses of vitamin C speed up cold recovery.
  • Team Building Activities & Work Cohesion: Regular team-building activities improve workplace cohesion.
  • Multitasking & Productivity: Multitasking reduces individual task efficiency.
  • Protein Intake & Muscle Growth: Increased protein consumption boosts muscle growth in individuals engaged in strength training.
  • Mentoring & Career Progression: Having a mentor accelerates career progression.
  • Fast Food & Obesity Rates: High consumption of fast food leads to increased obesity rates.
  • Deforestation & Biodiversity Loss: Accelerated deforestation results in significant biodiversity loss.
  • Language Learning & Cognitive Flexibility: Learning a second language enhances cognitive flexibility.
  • Red Wine & Heart Health: Moderate red wine consumption may benefit heart health.
  • Public Speaking Practice & Confidence: Regular public speaking practice boosts confidence.
  • Fasting & Metabolism: Intermittent fasting can rev up metabolism.
  • Plastic Usage & Ocean Pollution: Excessive use of plastics leads to increased ocean pollution.
  • Peer Tutoring & Academic Retention: Peer tutoring improves academic retention rates.
  • Mobile Usage & Sleep Patterns: Excessive mobile phone use before bed disrupts sleep patterns.
  • Green Spaces & Mental Well-being: Living near green spaces enhances mental well-being.
  • Organic Foods & Health Outcomes: Consuming organic foods leads to better health outcomes.
  • Art Exposure & Creativity: Regular exposure to art boosts creativity.
  • Gaming & Hand-Eye Coordination: Engaging in video games improves hand-eye coordination.
  • Prenatal Music & Baby’s Development: Exposing babies to music in the womb enhances their auditory development.
  • Dark Chocolate & Mood Enhancement: Consuming dark chocolate can elevate mood.
  • Urban Farms & Community Engagement: Establishing urban farms promotes community engagement.
  • Reading Fiction & Empathy Levels: Reading fiction regularly increases empathy.
  • Aerobic Exercise & Memory: Engaging in aerobic exercises sharpens memory.
  • Meditation & Blood Pressure: Regular meditation can reduce blood pressure.
  • Classical Music & Plant Growth: Plants exposed to classical music show improved growth.
  • Pollution & Respiratory Diseases: Higher pollution levels increase respiratory diseases’ incidence.
  • Parental Involvement & Child’s Academic Success: Direct parental involvement in schooling enhances children’s academic success.
  • Sugar Intake & Tooth Decay: High sugar intake increases the risk of tooth decay.
  • Physical Books & Reading Comprehension: Reading physical books improves comprehension better than digital mediums.
  • Daily Journaling & Self-awareness: Maintaining a daily journal enhances self-awareness.
  • Robotics Learning & Problem-solving Skills: Engaging in robotics learning fosters problem-solving skills in students.
  • Forest Bathing & Stress Relief: Immersion in forest environments (forest bathing) reduces stress levels.
  • Reusable Bags & Environmental Impact: Using reusable bags reduces environmental pollution.
  • Affirmations & Self-esteem: Regularly reciting positive affirmations enhances self-esteem.
  • Local Produce Consumption & Community Economy: Buying and consuming local produce boosts the local economy.
  • Sunlight Exposure & Vitamin D Levels: Regular sunlight exposure enhances Vitamin D levels in the body.
  • Group Study & Learning Enhancement: Group studies can enhance learning compared to individual studies.
  • Active Commuting & Fitness Levels: Commuting by walking or cycling improves overall fitness.
  • Foreign Film Watching & Cultural Understanding: Watching foreign films increases understanding and appreciation of different cultures.
  • Craft Activities & Fine Motor Skills: Engaging in craft activities enhances fine motor skills.
  • Listening to Podcasts & Knowledge Expansion: Regularly listening to educational podcasts broadens one’s knowledge base.
  • Outdoor Play & Child’s Physical Development: Encouraging outdoor play accelerates physical development in children.
  • Thrift Shopping & Sustainable Living: Choosing thrift shopping promotes sustainable consumption habits.
  • Nature Retreats & Burnout Recovery: Taking nature retreats aids in burnout recovery.
  • Virtual Reality Training & Skill Acquisition: Using virtual reality for training accelerates skill acquisition in medical students.
  • Pet Ownership & Loneliness Reduction: Owning a pet significantly reduces feelings of loneliness among elderly individuals.
  • Intermittent Fasting & Metabolism Boost: Practicing intermittent fasting can lead to an increase in metabolic rate.
  • Bilingual Education & Cognitive Flexibility: Being educated in a bilingual environment improves cognitive flexibility in children.
  • Urbanization & Loss of Biodiversity: Rapid urbanization contributes to a loss of biodiversity in the surrounding environment.
  • Recycled Materials & Carbon Footprint Reduction: Utilizing recycled materials in production processes reduces a company’s overall carbon footprint.
  • Artificial Sweeteners & Appetite Increase: Consuming artificial sweeteners might lead to an increase in appetite.
  • Green Roofs & Urban Temperature Regulation: Implementing green roofs in urban buildings contributes to moderating city temperatures.
  • Remote Work & Employee Productivity: Adopting a remote work model can boost employee productivity and job satisfaction.
  • Sensory Play & Child Development: Incorporating sensory play in early childhood education supports holistic child development.

Causal Hypothesis Statement Examples in Research

Research hypotheses often delve into the cause-and-effect relationships between variables. These causal hypotheses predict a specific effect if a particular cause is present, making them vital for experimental designs.

  • Artificial Intelligence & Job Market: Implementation of artificial intelligence in industries causes a decline in manual jobs.
  • Online Learning Platforms & Traditional Classroom Efficiency: The introduction of online learning platforms reduces the efficacy of traditional classroom teaching methods.
  • Nano-technology & Medical Treatment Efficacy: Using nano-technology in drug delivery enhances the effectiveness of medical treatments.
  • Genetic Editing & Lifespan: Advancements in genetic editing techniques directly influence the lifespan of organisms.
  • Quantum Computing & Data Security: The rise of quantum computing threatens the security of traditional encryption methods.
  • Space Tourism & Aerospace Advancements: The demand for space tourism propels advancements in aerospace engineering.
  • E-commerce & Retail Business Model: The surge in e-commerce platforms leads to a decline in the traditional retail business model.
  • VR in Real Estate & Buyer Decisions: Using virtual reality in real estate presentations influences buyer decisions more than traditional methods.
  • Biofuels & Greenhouse Gas Emissions: Increasing biofuel production directly reduces greenhouse gas emissions.
  • Crowdfunding & Entrepreneurial Success: The availability of crowdfunding platforms boosts the success rate of start-up enterprises.

Causal Hypothesis Statement Examples in Epidemiology

Epidemiology is the study of how and why certain diseases occur in particular populations. Causal hypotheses in this field aim to uncover relationships between health interventions, behaviors, and health outcomes.

  • Vaccine Introduction & Disease Eradication: The introduction of new vaccines directly leads to the reduction or eradication of specific diseases.
  • Urbanization & Rise in Respiratory Diseases: Increased urbanization causes a surge in respiratory diseases due to pollution.
  • Processed Foods & Obesity Epidemic: The consumption of processed foods is directly linked to the rising obesity epidemic.
  • Sanitation Measures & Cholera Outbreaks: Implementing proper sanitation measures reduces the incidence of cholera outbreaks.
  • Tobacco Consumption & Lung Cancer: Prolonged tobacco consumption is the primary cause of lung cancer among adults.
  • Antibiotic Misuse & Antibiotic-Resistant Strains: Misuse of antibiotics leads to the evolution of antibiotic-resistant bacterial strains.
  • Alcohol Consumption & Liver Diseases: Excessive and regular alcohol consumption is a leading cause of liver diseases.
  • Vitamin D & Rickets in Children: A deficiency in vitamin D is the primary cause of rickets in children.
  • Airborne Pollutants & Asthma Attacks: Exposure to airborne pollutants directly triggers asthma attacks in susceptible individuals.
  • Sedentary Lifestyle & Cardiovascular Diseases: Leading a sedentary lifestyle is a significant risk factor for cardiovascular diseases.

Causal Hypothesis Statement Examples in Psychology

In psychology, causal hypotheses explore how certain behaviors, conditions, or interventions might influence mental and emotional outcomes. These hypotheses help in deciphering the intricate web of human behavior and cognition.

  • Childhood Trauma & Personality Disorders: Experiencing trauma during childhood increases the risk of developing personality disorders in adulthood.
  • Positive Reinforcement & Skill Acquisition: The use of positive reinforcement accelerates skill acquisition in children.
  • Sleep Deprivation & Cognitive Performance: Lack of adequate sleep impairs cognitive performance in adults.
  • Social Isolation & Depression: Prolonged social isolation is a significant cause of depression among teenagers.
  • Mindfulness Meditation & Stress Reduction: Regular practice of mindfulness meditation reduces symptoms of stress and anxiety.
  • Peer Pressure & Adolescent Risk Taking: Peer pressure significantly increases risk-taking behaviors among adolescents.
  • Parenting Styles & Child’s Self-esteem: Authoritarian parenting styles negatively impact a child’s self-esteem.
  • Multitasking & Attention Span: Engaging in multitasking frequently leads to a reduced attention span.
  • Childhood Bullying & Adult PTSD: Individuals bullied during childhood have a higher likelihood of developing PTSD as adults.
  • Digital Screen Time & Child Development: Excessive digital screen time impairs cognitive and social development in children.

Causal Inference Hypothesis Statement Examples

Causal inference is about deducing the cause-effect relationship between two variables after considering potential confounders. These hypotheses aim to find direct relationships even when other influencing factors are present.

  • Dietary Habits & Chronic Illnesses: Even when considering genetic factors, unhealthy dietary habits increase the chances of chronic illnesses.
  • Exercise & Mental Well-being: When accounting for daily stressors, regular exercise improves mental well-being.
  • Job Satisfaction & Employee Turnover: Even when considering market conditions, job satisfaction inversely relates to employee turnover.
  • Financial Literacy & Savings Behavior: When considering income levels, financial literacy is directly linked to better savings behavior.
  • Online Reviews & Product Sales: Even accounting for advertising spend, positive online reviews boost product sales.
  • Prenatal Care & Child Health Outcomes: When considering genetic factors, adequate prenatal care ensures better health outcomes for children.
  • Teacher Qualifications & Student Performance: Accounting for socio-economic factors, teacher qualifications directly influence student performance.
  • Community Engagement & Crime Rates: When considering economic conditions, higher community engagement leads to lower crime rates.
  • Eco-friendly Practices & Brand Loyalty: Accounting for product quality, eco-friendly business practices boost brand loyalty.
  • Mental Health Support & Workplace Productivity: Even when considering workload, providing mental health support enhances workplace productivity.
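
One minimal way to see what "accounting for a confounder" does is to simulate data in which the confounder drives both the cause and the effect. In this hypothetical Python sketch (all numbers are invented for illustration, loosely based on the "Exercise & Mental Well-being" example), age influences both whether someone exercises and their well-being score. The naive comparison of exercisers versus non-exercisers then overstates the true effect, while comparing within age strata recovers it.

```python
import random
import statistics

random.seed(0)

# Simulate 5,000 people. Age (young vs. old) is a confounder: it
# raises both the chance of exercising and the well-being score.
rows = []
for _ in range(5000):
    young = random.random() < 0.5
    p_exercise = 0.7 if young else 0.3      # young people exercise more
    exercises = random.random() < p_exercise
    wellbeing = (50
                 + (2 if exercises else 0)  # true causal effect: +2
                 + (5 if young else 0)      # confounder effect: +5
                 + random.gauss(0, 3))
    rows.append((young, exercises, wellbeing))

def mean_wb(cond):
    """Mean well-being over rows matching cond(young, exercises)."""
    return statistics.mean(w for y, e, w in rows if cond(y, e))

# Naive estimate: mixes the exercise effect with the age effect.
naive = mean_wb(lambda y, e: e) - mean_wb(lambda y, e: not e)

# Stratified estimate: compare within each age group, then average.
strata = []
for young in (True, False):
    diff = (mean_wb(lambda y, e, g=young: y == g and e)
            - mean_wb(lambda y, e, g=young: y == g and not e))
    strata.append(diff)
adjusted = sum(strata) / len(strata)

print(f"naive estimate:    {naive:.2f}")    # inflated by the confounder
print(f"adjusted estimate: {adjusted:.2f}")  # close to the true +2
```

Stratification is only one adjustment strategy; regression with the confounder as a covariate, matching, or randomized assignment of the cause are common alternatives.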

What Are the Characteristics of a Causal Hypothesis?

Causal hypotheses are foundational in many research disciplines, as they predict a cause-and-effect relationship between variables. Their unique characteristics include:

  • Cause-and-Effect Relationship: The core of a causal hypothesis is to establish a direct relationship, indicating that one variable (the cause) will bring about a change in another variable (the effect).
  • Testability: They are formulated in a manner that allows them to be empirically tested using appropriate experimental or observational methods.
  • Specificity: Causal hypotheses should be specific, delineating clear cause and effect variables.
  • Directionality: They typically demonstrate a clear direction in which the cause leads to the effect.
  • Operational Definitions: They often use operational definitions, which specify the procedures used to measure or manipulate variables.
  • Temporal Precedence: The cause (independent variable) always precedes the effect (dependent variable) in time.

What is a causal hypothesis in research?

In research, a causal hypothesis is a clear, specific, testable, and falsifiable statement about the expected relationship between variables: it proposes that a change in one variable is the direct cause of a change in another variable. For instance, “A higher intake of vitamin C reduces the risk of the common cold.” Here, vitamin C intake is the independent variable, and the risk of catching a common cold is the dependent variable.

What is the difference between a causal and a descriptive hypothesis?

  • Definition: A causal hypothesis predicts a cause-and-effect relationship between two or more variables; a descriptive hypothesis describes an occurrence, detailing the characteristics or form of a particular phenomenon.
  • Example: Causal: “Consuming too much sugar can lead to diabetes.” Descriptive: “60% of adults in the city exercise at least three times a week.”
  • Purpose: A causal hypothesis aims to establish a causal connection between variables; a descriptive hypothesis aims to give an accurate portrayal of a situation or fact.
  • Method: Causal hypotheses are often tested with experiments; descriptive hypotheses often rely on surveys or observational studies.

How Do You Write a Causal Hypothesis? – A Step-by-Step Guide

  • Identify Your Variables: Pinpoint the cause (independent variable) and the effect (dependent variable). For instance, in studying the relationship between smoking and lung health, smoking is the independent variable while lung health is the dependent variable.
  • State the Relationship: Clearly define how one variable affects another. Does an increase in the independent variable lead to an increase or decrease in the dependent variable?
  • Be Specific: Avoid vague terms. Instead of saying “improved health,” specify the type of improvement like “reduced risk of cardiovascular diseases.”
  • Use Operational Definitions: Clearly define any terms or variables in your hypothesis. For instance, define what you mean by “regular exercise” or “high sugar intake.”
  • Ensure It’s Testable: Your hypothesis should be structured so that it can be disproven or supported by data.
  • Review Existing Literature: Check previous research to ensure that your hypothesis hasn’t already been tested, and to ensure it’s plausible based on existing knowledge.
  • Draft Your Hypothesis: Combine all the above steps to write a clear, concise hypothesis. For instance: “Regular exercise (defined as 150 minutes of moderate exercise per week) decreases the risk of cardiovascular diseases.”
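
The steps above can be captured in a small structure that forces each component of the hypothesis to be stated explicitly before drafting. This is only an organizational sketch: the `CausalHypothesis` class and its field names are invented for illustration, not a standard research tool.

```python
from dataclasses import dataclass

@dataclass
class CausalHypothesis:
    # Step 1: identify the variables, with operational definitions (Step 4).
    independent_var: str   # the cause
    dependent_var: str     # the effect
    # Steps 2 and 8 (from the tips): state the relationship and its direction.
    direction: str         # e.g. "increases", "decreases", "leads to"

    def statement(self) -> str:
        """Step 7: combine the pieces into one testable sentence."""
        return f"{self.independent_var} {self.direction} {self.dependent_var}."

h = CausalHypothesis(
    independent_var="Regular exercise (150 minutes of moderate exercise per week)",
    dependent_var="the risk of cardiovascular disease",
    direction="decreases",
)
print(h.statement())
```

Writing the hypothesis this way makes it easy to check the earlier criteria: if you cannot fill in a specific, operationally defined cause, effect, and direction, the hypothesis is not yet testable.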

Tips for Writing Causal Hypothesis

  • Simplicity is Key: The clearer and more concise your hypothesis, the easier it will be to test.
  • Avoid Absolutes: Using words like “all” or “always” can be problematic. Few things are universally true.
  • Seek Feedback: Before finalizing your hypothesis, get feedback from peers or mentors.
  • Stay Objective: Base your hypothesis on existing literature and knowledge, not on personal beliefs or biases.
  • Revise as Needed: As you delve deeper into your research, you may find the need to refine your hypothesis for clarity or specificity.
  • Falsifiability: Always ensure your hypothesis can be proven wrong. If it can’t be disproven, it can’t be validated either.
  • Avoid Circular Reasoning: Ensure that your hypothesis doesn’t assume what it’s trying to prove. For example, “People who are happy have a positive outlook on life” is a circular statement.
  • Specify Direction: In causal hypotheses, indicating the direction of the relationship can be beneficial, such as “increases,” “decreases,” or “leads to.”
