What Dilemma? Moral Evaluation Shapes Factual Belief


by Brittany Liu and Peter H. Ditto
University of California, Irvine



SUBMITTED FOR REVIEW AT Social Psychological and Personality Science


Moral dilemmas—like the “trolley problem” or real world examples like capital punishment—result from a conflict between consequentialist and deontological intuitions (i.e., whether ends justify means). We contend that people often resolve such moral conflict by aligning factual beliefs about consequences of acts with evaluations of the act’s inherent morality (i.e., morality independent of its consequences). In both artificial (Study 1) and real world (Study 2) dilemmas, the more an act was deemed inherently immoral, the more it was seen as unlikely to produce beneficial consequences and likely to involve harmful costs. Coherence between moral evaluations and factual beliefs increased with greater moral conviction, self-proclaimed topical knowledge, and political conservatism (Study 2). Reading essays about the inherent morality or immorality of capital punishment (Study 3) changed beliefs about its costs and benefits, even though no information about consequences was supplied. Implications for moral reasoning and political conflict are discussed.

What Dilemma? Moral Evaluation Shapes Factual Belief

Psychologists and philosophers have long been fascinated by moral dilemmas, such as Kohlberg’s (1969) famous Heinz story, in which a husband must choose whether to steal an overpriced drug to save his wife’s life, or more recently the “trolley problem,” in which the morality of redirecting a runaway train to kill one individual rather than five must be evaluated (Foot, 1967; Greene, Nystrom, Engell, Darley, & Cohen, 2004). These dilemmas place people in difficult moral predicaments, requiring them to weigh whether accomplishing a moral end (curing one’s wife or saving five lives) can justify ostensibly immoral means (committing a robbery or taking a single life).

Artificial moral dilemmas provide insight into how people think about real moral controversies such as capital punishment, in which the taking of one life must be balanced against the potential benefit of deterring future crime, or embryonic stem cell research, in which the morality of destroying potential human life is weighed against the possibility of discovering medical treatments that could save many actual lives. An interesting aspect of real world moral dilemmas, however, is that the conflict involved often seems more interpersonal than intrapersonal. That is, while ethicists and politicians debate whether the ends of capital punishment or stem cell research justify the means, most individuals seem not to experience their personal position on these issues as particularly conflicted. Instead, it seems that people who believe that capital punishment is inherently immoral also usually contend that it is ineffective in deterring crime, and those who believe that stem cell research is morally reprehensible almost always doubt its likelihood of producing future medical breakthroughs.

We report three studies documenting this coordination between moral evaluation and factual belief, and argue that people minimize the psychological conflict inherent in moral dilemmas by aligning their prescriptive evaluations of the morality of acts with their descriptive beliefs about the act’s potential consequences. The studies reveal how people making moral arguments can enjoy the best of both worlds—touting their moral imperatives while believing that the cost-benefit analysis is on their side as well.

Motivated Consequentialism

Moral dilemmas arise from dissonant intuitions about morally appropriate responses, often pitting consequentialist intuitions against deontological ones. The essence of consequentialism, glossing over its many philosophical variants, is that acts are moral to the extent that they maximize positive consequences (i.e., ends justify means). In a sense, consequentialism is a “rational” form of moral evaluation in which an act’s morality is determined, much like any other kind of economic decision, by an analysis of its costs and benefits. A deontological moral stance, alternatively, holds that while the consequences of actions are important, there are constraints on action independent of consequences such that some acts are inherently wrong in and of themselves (i.e., ends cannot justify some means). This notion of the “sacred”—that certain acts are “protected” from normal cost-benefit valuations—is seen by many as the essence of moral thinking (Baron & Spranca, 1997; Tetlock, 2003), and its conflict with highly routinized economic intuitions is the key dynamic underlying classic moral dilemmas. The trolley problem, for example, creates a moral dilemma because individuals are torn between feeling that killing an innocent individual for any reason is inherently wrong (a deontological intuition) and that killing one individual to save five makes good economic (and therefore moral) sense (a consequentialist intuition). Real world dilemmas like capital punishment, embryonic stem cell research, and forceful interrogation of terrorist suspects, all similarly involve a no-win choice between endorsing a morally distasteful act, and rejecting that act and with it the compelling logic of a favorable cost-benefit analysis.

In the moral reasoning literature, there is a clear implicit assumption that individuals confronting moral dilemmas struggle their way to either a consequentialist or deontological conclusion, and then simply live with the unavoidable downside of their either-or decision (cf. Greene et al., 2004). But over a half century of psychological research suggests that cognitive conflict of this type is unstable (Cooper, 2007; Read, Vanman, & Miller, 1997). People should feel pressure to minimize the dissonance evoked by moral dilemmas, and this may encourage post-hoc reasoning processes that shape descriptive beliefs about the costs and benefits of acts in ways that comport with deontological morality. For example, many political conservatives believe that promoting condom use to teenagers is inherently wrong. This deontological intuition conflicts with consequentialist sensibilities, however, if one also believes that condoms are effective at preventing pregnancy and sexually transmitted disease (STDs). Individuals can resolve this conflict by becoming unskeptical consumers of information that disparages the benefits of condom use (e.g., their prophylactic effectiveness) or enhances its costs (e.g., encouragement of promiscuous sex). A political liberal with few or no moral qualms about teenage condom use, on the other hand, should be relatively inclined to believe information that highlights condoms’ benefits and/or minimizes their costs. Analogously, liberals who feel moral disgust toward the death penalty should be prone to believe information emphasizing its ineffectiveness at deterring future crime or the risks of wrongful execution, while individuals with more favorable opinions about the justness of capital punishment should trust information underscoring its deterrent efficacy or that minimizes the likelihood of wrongful executions. 
This type of motivated cost-benefit analysis would incline people toward coherent, conflict-free moral beliefs in which the act that feels the best morally is also the act that produces the most favorable practical consequences.

Current Studies

Others have posited that values can shape factual beliefs (e.g., Baron & Spranca, 1997; Kahan, 2010). Kahan, for example, has examined how cultural values, such as those concerning the equitable distribution of goods (individualism vs. egalitarianism), are associated with risk-related beliefs. In one study, participants least likely to support mandatory human papillomavirus vaccinations (individualists) also thought vaccination was unlikely to reduce rates of cervical cancer and likely to encourage vaccinated females to have unprotected sex (Kahan, Braman, Slovic, Gastil, & Cohen, 2007).

Our studies build on this past research in two ways. First, examining the value-fact link in the context of moral dilemmas challenges prevailing “hydraulic” conceptions of consequentialism and deontology as distinct paths of reasoning that produce divergent moral preferences (e.g., Greene et al., 2004). Instead, the current view predicts that consequentialist and deontological rationales will often be more complementary than hydraulic, and in particular that individuals will often construct a “consequentialist crutch” (Ditto & Liu, 2011) to support what have typically been taken as deontologically-based moral stands. This complementary relation flows seamlessly from an intuitionist view of moral judgment (Haidt, 2001) in which individuals are thought to justify gut moral reactions post-hoc rather than reason their way to moral conclusions (Ditto, Pizarro, & Tannenbaum, 2009).

Second, no study to date has manipulated moral evaluations and shown them to shape cost-benefit beliefs. After two studies demonstrating the complementary nature of consequentialist and deontological reasoning in both artificial and real-world moral dilemmas, a final experimental study examines the causal link between moral evaluation and factual belief.

Study 1

Researchers note that many participants reject assumptions embedded in artificial moral dilemmas (e.g., that a man can be large enough to stop a trolley; Greene et al., 2009). This phenomenon is generally treated as a methodological nuisance, but the current study embraces it as a dependent variable, and examines whether individuals who judge an act as deontologically prohibited will also see it as low in potential benefits and high in potential costs.

Method [1]

Undergraduate students (N=123, 79% female) read a version of the classic footbridge dilemma in which a group of workmen can be saved from a runaway trolley by pushing a large stranger onto the tracks. Participants were asked how many workmen would need to be saved to justify pushing the man, using a 9-point scale ranging from “at least two” to “I would never push the man no matter how many lives would be saved.” Unlike past studies, whose wording asserted certainty about the link between actions and consequences, we used probabilistic language (e.g., “his large body will likely stop the trolley”) in order to examine perceptions of these likelihoods.

Two questions (r=.39) operationalized the act’s perceived likelihood of producing beneficial consequences (e.g., the large stranger’s body will stop the trolley, 1=very unlikely, 7=very likely). One question measured the act’s perceived costs (how much pain the stranger would feel if pushed onto the tracks; 1=no pain, 7=severe pain).


Eighty percent of participants said no trade-off in saved lives could justify pushing the man onto the tracks, allowing us to compare individuals who gave a fully deontological response to those who endorsed some level of consequentialist trade-off. Participants who said they would never push the large stranger onto the tracks believed the act was significantly less likely to be effective at saving the workmen (M=3.02, SD=1.33) and would cause significantly more pain (M=6.79, SD=0.80) than did those who believed it was justified to push the man to save some number of lives [2] (Mbenefit=4.58, SD=1.46; Mpain=5.44, SD=1.64), ts≥4.0, ps<.001, rs≥.42.
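For readers who want to see the arithmetic behind a group comparison of this kind, the sketch below runs a Welch (unequal-variances) t-test on fabricated 7-point benefit ratings. The scores and group sizes are invented for illustration and are not the study data.

```python
import math

# Hypothetical ratings (1-7) of how likely the push is to stop the trolley:
deont = [3, 2, 4, 3, 2, 3, 4, 3]   # "would never push" responders
conseq = [5, 4, 6, 5, 4, 5, 6, 5]  # responders endorsing some trade-off

def welch_t(a, b):
    """Welch's t statistic and Satterthwaite df for two independent samples."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    va = sum((x - ma) ** 2 for x in a) / (len(a) - 1)  # sample variances
    vb = sum((x - mb) ** 2 for x in b) / (len(b) - 1)
    se2 = va / len(a) + vb / len(b)
    t = (ma - mb) / math.sqrt(se2)
    df = se2 ** 2 / ((va / len(a)) ** 2 / (len(a) - 1)
                     + (vb / len(b)) ** 2 / (len(b) - 1))
    return t, df

t, df = welch_t(deont, conseq)
print(round(t, 2), round(df, 1))
```

With real data one would typically call a library routine such as scipy.stats.ttest_ind(a, b, equal_var=False), which implements the same statistic.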


Study 1 supports the notion that people align moral evaluations of an act with beliefs about its consequences. In a scenario used frequently in the literature, participants apparently taking a principled moral stand—asserting that an act that would sacrifice one life to save many was nonetheless morally unacceptable—also viewed that act as less likely to actually save those lives, and more likely to cause pain to the individual being sacrificed, compared to individuals endorsing a more consequentialist moral position.

The data have two obvious limitations. First, it is difficult to interpret hypothetical responses to bizarre moral predicaments. Second, the correlational nature of the data makes it unclear whether moral judgments shaped cost-benefit beliefs, or whether participants’ prescriptive judgments were simply based on consequentialist logic (e.g., sacrificing the stranger was judged morally wrong because it was seen as unlikely to save the workmen). But it is important to consider the conceptual implications of this latter causal account. In past research, refusing to sacrifice the stranger would be characterized as a deontological judgment. But if a cost-benefit calculation underlies the decision that sacrificing the man is immoral, it seems odd to characterize that judgment as deontological.

Study 2

Study 2 sought to replicate Study 1’s findings in judgments about four real world moral controversies frequently discussed in contemporary American politics.


Participants were 1806 adults visiting the website yourmorals.org. Participants spending less than one minute or more than 15 minutes on a single page were excluded, resulting in a final sample of 1567 (66% male, Mage=37.1).

Four moral issues were presented in random order. Ideological balance was sought by selecting two issues more often seen as morally acceptable by political conservatives than liberals (forceful interrogation of terrorist suspects, capital punishment), and two issues more often seen as morally acceptable by political liberals than conservatives (condom promotion in sex education, embryonic stem cell research). For each issue, participants used 7-point scales to first answer a general morality question (e.g., whether forceful interrogation is morally wrong in most or all cases) and then a deontological morality question (e.g., whether forceful interrogation is wrong “even if it is effective in getting suspects to talk”). The first question was not analyzed, but was included to clarify that the second question asked for assessments of the inherent morality of the action (independent of consequences). The perceived likelihood of beneficial consequences was measured with at least three questions per issue (e.g., whether forceful interrogation produces valid intelligence; whether encouraging condom use reduces teen pregnancy and STDs). Perceived costs of the action were measured with at least two questions per issue (e.g., whether capital punishment results in wrongful executions; whether stem cell research encourages pregnancy for profit). Indexes were created within issues (αs≥.64) with higher values reflecting greater perceived benefits and costs.
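The index reliabilities reported here are Cronbach’s alpha. As a sketch of the computation, alpha can be obtained directly from raw item scores, as below; the three-item “benefit” index and eight respondents are fabricated for illustration.

```python
def cronbach_alpha(items):
    """Cronbach's alpha; items is a list of item-score lists, one score per respondent."""
    k, n = len(items), len(items[0])

    def var(v):  # population variance, used consistently in numerator and denominator
        m = sum(v) / len(v)
        return sum((x - m) ** 2 for x in v) / len(v)

    totals = [sum(item[i] for item in items) for i in range(n)]
    return k / (k - 1) * (1 - sum(var(it) for it in items) / var(totals))

# Toy 3-item benefit index for eight hypothetical respondents (7-point scales)
items = [
    [6, 5, 2, 3, 6, 2, 5, 3],
    [7, 5, 1, 2, 6, 3, 5, 2],
    [6, 4, 2, 2, 7, 2, 6, 3],
]
print(round(cronbach_alpha(items), 2))
```

Because the toy items track each other closely, alpha here lands well above the .64 floor reported for the real indexes.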

Participants indicated on 7-point scales their moral conviction toward each issue (Skitka, Bauman, & Sargis, 2005), how informed they were on each issue, and their political ideology (1=very liberal, 7=very conservative).


Multiple regression analyses strongly supported predictions for all four issues. For each issue, deontological morality significantly predicted both benefit and cost-related beliefs controlling for gender, feeling informed about and morally committed to the issue, and political conservatism (Table 1). The more participants believed that the action was immoral even if it had beneficial consequences, the less they believed it would actually produce those consequences (βs≤-.32) and the more they believed it would have undesirable costs (βs≥.29). Illustrating with condom promotion, the more participants endorsed the belief that condom education was morally wrong even if it prevented pregnancy and STDs, the less they believed that condoms were effective at preventing these problems, and the more they believed that promoting condom use encouraged teenagers to have sex.

To examine factors that might moderate the observed coordination between moral and factual belief, additional regression analyses were conducted identical to the analyses above, but including the interaction effect between deontological morality and one of the four control variables as an additional predictor. The criterion variable was a combined index of the cost and benefit items for that issue (αs≥.79). Table 2 shows a consistent pattern of significant interaction effects for moral conviction (3/4 issues), feeling informed about the issue (3/4 issues), and political conservatism (4/4 issues). Specifically, the tendency to perceive morally distasteful acts as also being practically disadvantageous was significantly more pronounced for individuals who were morally convicted about the issue, for individuals who felt highly informed about the issue, and for political conservatives. Simple slopes analyses showed that associations between moral and factual beliefs were still significant for participants low on moral conviction and informedness (estimated at two standard deviations below the mean) and for political liberals (bs≥.08, ts≥3.18, ps≤.002).
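The simple-slopes analyses reduce to arithmetic on the fitted interaction model: the slope of moral evaluation at a given level of the moderator is the main-effect coefficient plus the interaction coefficient times that level. The sketch below uses invented coefficients and an invented moderator SD, not values from Table 2.

```python
# Simple-slopes arithmetic for a moderated regression of the form
#   belief = b0 + b1*moral + b2*cons + b3*(moral*cons),
# where the slope of `moral` at centered conservatism level c is b1 + b3*c.
b1, b3 = 0.26, 0.06   # illustrative main effect and interaction coefficients
sd_cons = 1.5         # assumed SD of centered political conservatism

def simple_slope(c):
    return b1 + b3 * c

slope_liberal = simple_slope(-2 * sd_cons)   # probed two SDs below the mean
slope_conserv = simple_slope(+2 * sd_cons)   # and two SDs above
print(round(slope_liberal, 2), round(slope_conserv, 2))
```

The qualitative pattern matches the reported result: the moral-factual slope stays positive even at the liberal end, but is markedly steeper at the conservative end.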


Study 2 replicated the results of Study 1 across four real moral dilemmas, again revealing a pattern in which descriptive cost-benefit beliefs were well coordinated with prescriptive moral opinions. Theoretically, deontological moral beliefs are independent of consequences, but our data consistently show that evaluations of an act’s inherent morality are strongly associated with factual beliefs about both its positive and negative consequences. Although the data are again correlational, this is the pattern one would expect if people felt psychological pressure to reinforce their moral evaluations with consequentialist logic.

The tighter connection between moral evaluation and factual belief with increasing moral conviction suggests that moral motivations are a unique and important contributor to the effect. Research has shown that positions held with moral conviction are often experienced as objective, self-evident truths (Goodwin & Darley, 2008; Skitka et al., 2005; Wright, Cullum, & Schwab, 2008), and part of this phenomenon may be the tendency to generate both deontological and consequentialist rationales for deeply held moral views. That individuals who feel informed about an issue also show greater moral-factual coordination is reminiscent of an effect found in a number of recent studies suggesting that, contrary to most people’s initial intuitions, ideological biases are more rather than less pronounced with increasing political knowledge and sophistication (Kahan et al., 2011; Taber & Lodge, 2006). Finally, while our political ideology results can be taken as consistent with a body of work associating conservatism with heuristic and motivated thinking (Eidelman, Crandall, Goodman, & Blanchar, 2012; Jost, Kruglanski, Glaser, & Sulloway, 2003; Tetlock, 1983), it is important to also note the modest size of these interaction effects and that significant moral-factual coordination was found across the political spectrum.

Study 3

Study 3 addresses the lingering question of causal influence by manipulating participants’ deontologically-based evaluations of the morality of capital punishment and examining the effects on cost-benefit beliefs.


Undergraduate students (N = 126, 84% female) read descriptions of the four issues from Study 2. For each issue they responded to two deontological morality items (e.g., “It doesn’t matter if the death penalty discourages would-be criminals, it is still morally wrong”), three perceived benefit items (e.g., “There is no credible evidence that the death penalty reduces the rate of future murders”), and three perceived cost items (e.g., “How frequently do you think the death penalty is carried out in the U.S. against someone who is actually innocent?”). Scales ranged from strongly disagree/very infrequently to strongly agree/very frequently, and participants responded on a line divided into 80 segments.

Participants were then randomly assigned to read an essay presenting moral arguments either for or against capital punishment. The essays were equated for length (517 and 521 words, respectively) and, most importantly, made purely deontological arguments with no mention of consequentialist costs or benefits. The main points in the pro-capital punishment essay were: 1) justice for murder is most fairly achieved with capital punishment; 2) premeditated murderers are, by choice, subhuman and undeserving of mercy; and 3) favoring capital punishment shows the highest regard for human life. The main points in the anti-capital punishment essay were: 1) capital punishment is barbaric and inhumane; 2) it is wrong to solve violence with further violence; and 3) it is wrong to quantify death by saying some homicides deserve less punishment than others.

After reading the essay, participants re-answered the capital punishment items from the opening questionnaire. Indexes were created for pre- and post-essay responses (αs≥.71) with higher values indicating greater perceived immorality, benefits, and costs.


Analyses of covariance examined the effect of essay condition on post-essay judgments controlling for pre-essay judgments.

As intended, the essays produced significant differences in moral assessments of capital punishment, F(1,123)=13.08, p<.001, η2=.10. Participants reading the anti-capital punishment essay judged the death penalty as more deontologically immoral (M=48.52, SD=21.0) than did those reading the pro-capital punishment essay (M=40.76, SD=21.6).

More importantly, exposure to the essays produced the predicted differences in consequentialist beliefs. Participants reading the anti-capital punishment essay expressed significantly weaker beliefs in capital punishment’s deterrent efficacy (M=34.12, SD=15.3) than did participants reading the pro-capital punishment essay (M=38.34, SD=17.3), F(1,123)=23.19, p<.001, η2=.16. Figure 1 shows mean change scores between pre- and post-essay judgments, and simple effects tests confirm that participants’ beliefs about the benefits of capital punishment significantly decreased after reading the anti-capital punishment essay (F[1,124]=15.55, p<.001, η2=.11) and significantly increased after reading the pro-capital punishment essay (F[1,124]=10.20, p=.002, η2=.08). The perceived costs index likewise revealed that participants reading the anti-capital punishment essay expressed significantly stronger beliefs that the death penalty had important costs (M=42.38, SD=17.4) than did those reading the pro-capital punishment essay (M=37.84, SD=16.3), F(1,123)=12.75, p=.001, η2=.09. Simple effects tests again showed that participants’ evaluations of capital punishment’s undesirable costs significantly increased after reading the anti-capital punishment essay (F[1,124]=4.44, p=.037, η2=.04) and significantly decreased after reading the pro-capital punishment essay [3] (F[1,124]=8.18, p=.005, η2=.06; see Figure 1).
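The analysis of covariance used here amounts to comparing post-essay means after adjusting for pre-essay scores. The sketch below computes covariate-adjusted means in the classical way (grand-mean-centered covariate with a pooled within-group slope); the pre/post scores and the tiny samples are fabricated for illustration, not the study data.

```python
# Hypothetical (pre, post) deterrence-belief scores (0-80 scale) by essay condition
pro  = [(40, 46), (38, 44), (42, 47), (36, 41)]   # pro-capital punishment essay
anti = [(41, 36), (39, 35), (37, 32), (43, 39)]   # anti-capital punishment essay

def ancova_adjusted_means(groups):
    """Classical ANCOVA: pooled within-group slope, then covariate-adjusted means."""
    all_pre = [p for g in groups for p, _ in g]
    grand = sum(all_pre) / len(all_pre)
    sxy = sxx = 0.0  # pooled within-group cross-products for the slope of post on pre
    for g in groups:
        mp = sum(p for p, _ in g) / len(g)
        mq = sum(q for _, q in g) / len(g)
        sxy += sum((p - mp) * (q - mq) for p, q in g)
        sxx += sum((p - mp) ** 2 for p, _ in g)
    b = sxy / sxx
    # Each group's mean post score, shifted as if its pre scores sat at the grand mean
    return [sum(q - b * (p - grand) for p, q in g) / len(g) for g in groups]

adj_pro, adj_anti = ancova_adjusted_means([pro, anti])
print(round(adj_pro, 1), round(adj_anti, 1))
```

The adjusted means, rather than the raw post-essay means, are what the F tests on essay condition compare.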

We combined the perceived benefits and costs items (αs≥.77) to examine whether changes in moral beliefs mediated the pro- and anti-capital punishment essays’ effect on factual beliefs. Bias-corrected bootstrapping was used to test this indirect effect (Preacher & Hayes, 2008). As predicted, pre-essay to post-essay changes in moral beliefs partially mediated the relation between essay condition and change in cost-benefit beliefs (B=1.01, SE=.52, 95% CI [0.29, 2.60]), suggesting that change in beliefs about the inherent morality of the death penalty contributed to changes in beliefs about its cost-benefit profile.
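The logic of a bootstrap test of the indirect effect can be sketched as follows. The paper used Preacher and Hayes’s (2008) bias-corrected bootstrap; this simplified version draws plain percentile confidence limits instead, and the condition/mediator/outcome triples are fabricated toy data built so the condition effect runs entirely through the mediator.

```python
import random

random.seed(1)

# Toy data: x = essay condition (0 = anti, 1 = pro), m = change in moral
# evaluation, y = change in cost-benefit belief (all values invented).
x = [0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1]
m = [0.1, -0.1, 0.2, -0.2, 0.0, 0.05, 2.1, 1.9, 2.2, 1.8, 2.0, 2.05]
y = [1.5 * v for v in m]

def slope(a, b):
    """OLS slope of b regressed on a."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    return (sum((u - ma) * (v - mb) for u, v in zip(a, b))
            / sum((u - ma) ** 2 for u in a))

def residualize(v, on):
    """Residuals of v after regressing it on `on`."""
    s, mo, mv = slope(on, v), sum(on) / len(on), sum(v) / len(v)
    return [vi - (mv + s * (oi - mo)) for vi, oi in zip(v, on)]

def indirect(xs, ms, ys):
    """a*b indirect effect; None when a resample is too degenerate to fit."""
    if len(set(xs)) < 2:
        return None
    rm, ry = residualize(ms, xs), residualize(ys, xs)
    if sum(v * v for v in rm) < 1e-9:
        return None
    return slope(xs, ms) * slope(rm, ry)  # path a (x->m) times path b (m->y | x)

n, boot = len(x), []
while len(boot) < 2000:
    idx = [random.randrange(n) for _ in range(n)]
    val = indirect([x[i] for i in idx], [m[i] for i in idx], [y[i] for i in idx])
    if val is not None:
        boot.append(val)
boot.sort()
ci = (boot[50], boot[-51])  # simple percentile 95% CI over 2000 resamples
print(round(indirect(x, m, y), 2), round(ci[0], 2), round(ci[1], 2))
```

A confidence interval excluding zero, as in the reported result, is what licenses the mediation claim.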


Study 3 provides experimental evidence that prescriptive feelings about the morality of capital punishment can shape descriptive beliefs about its consequences. Within a single experimental session, persuasive essays produced small but significant changes in assessments of the inherent morality of the death penalty. These essays also changed beliefs about whether capital punishment deterred future crime or led to miscarriages of justice, even though neither essay made mention of any cost-benefit issues. Changes in cost-benefit beliefs were only partially mediated by changes in moral evaluations, but the findings are nonetheless impressive given the relatively weak essay manipulation and brief time interval between judgments. Past research, including our first two studies, has produced intriguing correlational evidence that moral values can shape factual cost-benefit beliefs. Study 3 provides the most compelling support to date for a causal relation.

General Discussion

Imagine a politician who acknowledges that waterboarding is effective in producing intelligence that can thwart future terrorist attacks, but who nevertheless asserts that the technique is inherently immoral and advocates its abolition. Most people would find this position morally admirable. Moral stands are inspiring when we see them (Ditto & Mastronarde, 2009; Eagly, Wood, & Chaiken, 1978), but this admiration flows, at least in part, from the fact that such stands occur so infrequently.

While individuals can and do appeal to principle in some cases to support their moral positions, we argue that this is a difficult stance psychologically because it conflicts with well-rehearsed economic intuitions urging that the most rational course of action is the one that produces the most favorable cost-benefit ratio. Our research suggests that people resolve such dilemmas by bringing cost-benefit beliefs into line with moral evaluations, such that the right course of action morally becomes the right course of action practically as well. Study 3 provides experimental confirmation of a pattern implied by both our own and others’ correlational research (e.g., Kahan, 2010): People shape their descriptive understanding of the world to fit their prescriptive understanding of it. Our findings contribute to a growing body of research demonstrating that moral evaluations affect non-moral judgments such as assessments of cause (Alicke, 2000; Cushman & Young, 2011), intention (Knobe, 2003, 2010), and control (Young & Phillips, 2011). At the broadest level, all these examples represent a tendency, long noted by philosophers, for people to have trouble maintaining clear conceptual boundaries between what is and what ought to be (Davis, 1978; Hume, 1740/1985).

Theoretical Considerations

Our studies provide little direct insight into underlying mechanisms, but the available evidence is consistent with models of explanatory coherence (Read et al., 1997; Thagard, 2004), which posit that individuals construct beliefs and preferences through a process of parallel constraint satisfaction (e.g., Simon, Krawczyk, & Holyoak, 2004). Coherence-based models subsume classic cognitive consistency theories, but reject simplifying assumptions about linear causal flow in favor of a more dynamic view in which beliefs, feelings, goals, and actions are mutually influential and are adjusted iteratively toward a point of maximal internal consistency or “coherence.”

It is reasonable to assume that moral judgments also involve coherence pressures, which would be best satisfied when descriptive beliefs about an act’s consequences are consistent with prescriptive evaluations of its moral status. A moral coherence view supplements an intuitionist perspective by highlighting the bidirectional relation between moral and factual beliefs. For example, prior research shows that changing beliefs about the consequences of acts changes moral evaluations (Gino, Shu, & Bazerman, 2010; Walster, 1966) [4]. Our research demonstrates that, conversely, moral evaluations can also shape beliefs about consequences.

Our results challenge simple conceptual distinctions between deontological and consequentialist judgment. Moral intuitionism suggests that rather than reasoning their way to moral conclusions using either deontological or consequentialist logic, people’s moral justifications are guided by visceral reactions about the rightness or wrongness of the action in question (Haidt, 2001). As such, people should be inclined to embrace any justification that coheres with their moral intuitions, whether that justification is a broad deontological rule, information about consequences, or both. Future research should examine whether justification processes work similarly when an individual ultimately approves versus disapproves of an act’s morality, but it is crucial to note that coherence pressures will be most pronounced in situations that provoke conflicting moral intuitions (i.e., moral dilemmas). Our data show that under these conditions, characterizing moral judgments as deontological may be particularly misleading, as these judgments are often reinforced by a motivated consequentialist calculus. As Baron and Spranca (1997) wryly phrased it, “people want to have their non-utilitarian cake and eat it too” (p. 13). The recognition that cost-benefit analyses, like other forms of reasoning, are subject to motivational influence also dovetails with recent research questioning the superior normative status often ascribed to consequentialist moral thinking (Bennis, Medin, & Bartels, 2010).

Practical Considerations

The tendency to harness factual beliefs to support moral commitments has social and political implications. For example, abstinence-only sexual education programs often yield poor results, producing little or no delay in first sexual intercourse, sometimes accompanied by increased rates of unprotected sex (Rosenbaum, 2009). This is precisely the pattern our analysis predicts. It is difficult to believe that encouraging condom use is both immoral and effective. One way to resolve this conflict is to come to believe that condoms are ineffective, and abstinence-only programs are well known for disparaging condom effectiveness (Santelli et al., 2006). Interestingly, a recent study found that an abstinence-education program that was explicitly non-moralistic was effective in delaying intercourse with no negative effect on condom use (Jemmott, Jemmott, & Fong, 2010).

More generally, the partisan battles that dominate contemporary American politics are fueled not just by well-documented differences in liberals’ and conservatives’ moral sensibilities (e.g., Graham, Haidt & Nosek, 2009), but also by huge discrepancies in factual beliefs. Resolving differences of moral opinion is challenging enough, but when these differences align themselves with differing perceptions of fact, fruitful negotiation becomes considerably more difficult. Moreover, it should be particularly disheartening for fans of political compromise that the tendency to recruit facts in support of moral positions is likely to be most pronounced in individuals with strong moral convictions and a high opinion of how informed they are: a reasonable characterization of the psychological state of the political elites who most affect policy decisions.

Politicians and pundits are fond of challenging their ideological opponents with a line usually attributed to former Senator Daniel Patrick Moynihan, “You are entitled to your own opinion. But you are not entitled to your own facts.” The current research suggests that in the realm of moral reasoning at least, a clean separation of opinion and fact may be difficult to achieve.



1 The on-line supplement details all study materials.

2 When analyzed separately, both perceived benefit questions show significant differences identical to the combined index (ts≥3.7). Treating the lives saved question as continuous rather than dichotomous produces significant correlations with all three cost-benefit questions such that the more lives required to justify the act, the less beneficial and more costly it was perceived to be (rs≥.36).

3 We conducted follow-up analyses on benefit and cost judgments that accounted for participants’ initial moral evaluation of capital punishment. The main effects of essay condition and simple effects analyses remained significant (Fs≥3.98, ps≤.05), suggesting that reading attitudinally consistent versus inconsistent essays did not affect our findings.

4 Interesting questions arise about the normative status of outcome information in moral judgment (e.g., Hershey & Baron, 1992). Although reliance on outcome information is often treated as a bias, it also reflects the rationalist foundation of consequentialism.


Alicke, M. D. (2000). Culpable control and the psychology of blame. Psychological Bulletin, 126, 556-574.

Baron, J., & Spranca, M. (1997). Protected values. Organizational Behavior and Human Decision Processes, 70, 1-16.

Bennis, W. M., Medin, D. L., & Bartels, D. M. (2010). The costs and benefits of calculation and moral rules. Perspectives on Psychological Science, 5, 187-202.

Cooper, J. (2007). Cognitive dissonance: Fifty years of a classic theory. Thousand Oaks, CA: Sage.

Cushman, F., & Young, L. (2011). Patterns of moral judgment derive from nonmoral psychological representations. Cognitive Science, 35, 1052-1075.

Davis, B. D. (1978). The moralistic fallacy. Nature, 272, 390.

Ditto, P. H., & Liu, B. (2011). Deontological dissonance and the consequentialist crutch. In M. Mikulincer & P. Shaver (Eds.), The social psychology of morality: Exploring the causes of good and evil (pp. 51-70). Washington, D.C.: American Psychological Association.

Ditto, P. H., & Mastronarde, A. J. (2009). The paradox of the political maverick. Journal of Experimental Social Psychology, 45, 295-298.

Ditto, P. H., Pizarro, D. A., & Tannenbaum, D. (2009). Motivated moral reasoning. In B. H. Ross (Series Ed.) & D. M. Bartels, C. W. Bauman, L. J. Skitka, & D. L. Medin (Eds.), Psychology of learning and motivation, Vol. 50: Moral judgment and decision making (pp. 307-338). San Diego, CA: Academic Press.

Eagly, A. H., Wood, W., & Chaiken, S. (1978). Causal inferences about communicators and their effect on opinion change. Journal of Personality and Social Psychology, 36, 424-435.

Eidelman, S., Crandall, C. S., Goodman, J. A., & Blanchar, J. C. (2012). Low-effort thought promotes political conservatism. Personality and Social Psychology Bulletin, 38, 808-820.

Foot, P. (1994). The problem of abortion and the doctrine of double effect. In B. Steinbock & A. Norcross (Eds.), Killing and letting die (2nd ed., pp. 266-279). New York: Fordham University Press. (Original work published 1967)

Gino, F., Shu, L. L., & Bazerman, M. H. (2010). Nameless + Harmless = Blameless: When seemingly irrelevant factors influence judgment of (un)ethical behavior. Organizational Behavior and Human Decision Processes, 111, 102-115.

Goodwin, G. P., & Darley, J. M. (2008). The psychology of meta-ethics: Exploring objectivism. Cognition, 106, 1339-1366.

Graham, J., Haidt, J., & Nosek, B. (2009). Liberals and conservatives rely on different sets of moral foundations. Journal of Personality and Social Psychology, 96, 1029-1046.

Greene, J. D., Cushman, F. A., Stewart, L. E., Lowenberg, K., Nystrom, L. E., & Cohen, J. D. (2009). Pushing moral buttons: The interaction between personal force and intention in moral judgment. Cognition, 111, 364-371.

Greene, J. D., Nystrom, L. E., Engell, A. D., Darley, J. M., & Cohen, J. D. (2004). The neural bases of cognitive conflict and control in moral judgment. Neuron, 44, 389-400.

Haidt, J. (2001). The emotional dog and its rational tail: A social intuitionist approach to moral judgment. Psychological Review, 108, 814-834.

Hershey, J. C., & Baron, J. (1992). Judgment by outcome: When is it justified? Organizational Behavior and Human Decision Processes, 53, 89-93.

Hume, D. (1985). A treatise of human nature. London: Penguin. (Original work published 1740)

Jemmott, J. B., Jemmott, L. S., & Fong, G. T. (2010). Efficacy of a theory-based abstinence-only intervention over 24 months: A randomized controlled trial with young adolescents. Archives of Pediatrics and Adolescent Medicine, 164, 152-159.

Jost, J. T., Glaser, J., Kruglanski, A. W., & Sulloway, F. (2003). Political conservatism as motivated social cognition. Psychological Bulletin, 129, 339-375.

Kahan, D. (2010). Fixing the communications failure. Nature, 463, 296-297.

Kahan, D. M., Braman, D., Slovic, P., Gastil, J., & Cohen, G. L. (2007). The second national risk and culture study: Making sense of - and making progress in - the American culture war of fact. Harvard Law School Program on Risk Regulation Research Paper No. 08-26. Retrieved April 21, 2009, from http://ssrn.com/abstract=1017189

Kahan, D. M., Wittlin, M., Peters, E., Slovic, P., Ouellette, L. L., Braman, D., & Mandel, G. N. (2011). The tragedy of the risk-perception commons: Culture conflict, rationality conflict, and climate change. Temple University Legal Studies Research Paper No. 2011-26; Cultural Cognition Project Working Paper No. 89. Available at SSRN: http://ssrn.com/abstract=1871503

Knobe, J. (2003). Intentional action in folk psychology: An experimental investigation. Philosophical Psychology, 16, 309-324.

Knobe, J. (2010). Person as scientist, person as moralist. Behavioral and Brain Sciences, 33, 315-365.

Kohlberg, L. (1969). Stage and sequence: The cognitive-developmental approach to socialization. In D. A. Goslin (Ed.), Handbook of socialization theory and research (pp. 347-489). Chicago: Rand McNally.

Preacher, K. J., & Hayes, A. F. (2004). SPSS and SAS procedures for estimating indirect effects in simple mediation models. Behavior Research Methods, Instruments, and Computers, 36, 717-731.

Read, S. J., Vanman, E. J., & Miller, L. C. (1997). Connectionism, parallel constraint satisfaction processes, and Gestalt principles: (Re)introducing cognitive dynamics to social psychology. Personality and Social Psychology Review, 1, 26-53.

Rosenbaum, J. E. (2009). Patient teenagers? A comparison of the sexual behavior of virginity pledgers and matched nonpledgers. Pediatrics, 123, e110-e120.

Santelli, J., Ott, M. A., Lyon, M., Rogers, J., Summers, D., & Schleifer, R. (2006). Abstinence and abstinence-only education: A review of US policies and programs. Journal of Adolescent Health. 38, 72-81.

Simon, D., Krawczyk, D. C., & Holyoak, K. J. (2004). Construction of preferences by constraint satisfaction. Psychological Science, 15, 331-336.

Skitka, L. J., Bauman, C. W., & Sargis, E. G. (2005). Moral conviction: Another contributor to attitude strength or something more? Journal of Personality and Social Psychology, 88, 895-917.

Taber, C. S., & Lodge, M. (2006). Motivated skepticism in the evaluation of political beliefs. American Journal of Political Science, 50, 755-769.

Tetlock, P. E. (1983). Cognitive style and political ideology. Journal of Personality and Social Psychology, 45, 118-126.

Tetlock, P. E. (2003). Thinking about the unthinkable: Coping with secular encroachments on sacred values. Trends in Cognitive Science, 7, 320-324.

Thagard, P. (2004). Coherence in thought and action. Cambridge, MA: MIT Press.

Walster, E. (1966). Assignment of responsibility for an accident. Journal of Personality and Social Psychology, 3, 73-79.

Wright, J. C., Cullum, J., & Schwab, N. (2008). The cognitive and affective dimensions of moral conviction: Implications for attitudinal and behavioral measures of interpersonal tolerance. Personality and Social Psychology Bulletin, 34, 1461-1476.

Young, L., & Phillips, J. (2011). The paradox of moral focus. Cognition, 119, 166-178.

Table 1. Perceived Benefits and Costs of Moral Issues Regressed on Deontological Immorality and Control Variables (Ns = 1291-1298)

Table 2. Interaction Effects Between Deontological Immorality and Gender, Moral Conviction, Feeling Informed About the Issue, and Political Conservatism on the Combined Costs/Low-Benefits Index (Ns = 1291-1298)

Figure 1. Mean change in perceived benefits and costs by essay condition. Positive change represents believing capital punishment to be more beneficial and costly after the essay.