
Do No Harm

Looking for maximum sustainable goodness.

Let us try to teach generosity and altruism, because we are born selfish. —Richard Dawkins

"Do no harm" is a minimum ethical requirement. "Help as much as you can – without doing undue harm to yourself" is a more demanding imperative, and it provides a taste of the difficulties we face when trying to do the right thing. What is "undue harm" to the self? How are we to weigh the benefits we can create for others against the sacrifices we need to make?

Max Bazerman is a rational ethicist or perhaps an ethical rationalist. He believes that clear thinking can help improve the general welfare of humanity, other living things, and the planet as a whole. Perfection, as the title of his latest book indicates (Bazerman, 2020), may not be had, but we can do better than we have.

Self-sacrifice

As a utilitarian, Bazerman seeks to promote "the greatest amount of good for the greatest number" (Driver, 2014). A strict utilitarian would rather take $11 while Max gets $1,000 than take $10 while Max also gets $10, because the first option yields the larger total ($1,011 vs. $20). Are you a strict utilitarian? And what would you do in the trolley problem? Utilitarianism says you should sacrifice one person if you can save the lives of five (or even two) others. As the self enjoys no special status, you should kill yourself if you can save more than one person, whoever they are. If you bristle at this moral imperative, consider a recent study Bazerman reports, in which respondents liked utilitarianism more when they made their judgments from behind a veil of ignorance. Most people will not agree to have their organs harvested to save five dying people in a hospital. If, however, they are told to imagine themselves in a group of six, knowing that five will fall ill, they are more likely to accept the sacrifice of a healthy individual. The question is whether the endorsement of utilitarianism from behind the veil of ignorance justifies its application when the facts of need are known.
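The arithmetic behind the veil is easy to make explicit. A back-of-the-envelope sketch, assuming that the five who fall ill would die without the transplants: if you are equally likely to be any of the six, refusing the sacrifice leaves you alive only if you happen to be the healthy one, while endorsing it kills you only in that same case.

$$
P(\text{survive} \mid \text{no sacrifice}) = \frac{1}{6}, \qquad P(\text{survive} \mid \text{sacrifice}) = \frac{5}{6}
$$

Behind the veil, the policy looks like a very good bet.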

There is room for doubt. The thought experiment assumes either that the six people are interchangeable or that their assignment to the sick and healthy categories was perfectly random. The person who finds herself in the healthy state should run the thought experiment after the fact and realize that it could have been her in the sickbed. If she declines to do so, she is said to be guilty of an outcome bias (Krueger et al., 2020): she happens to be healthy and fails to realize that it could have been otherwise.

Equality and randomness are abstractions that real settings rarely provide. One may ask why the healthy person is in the hospital at all. The five patients are there because they are sick, but why is she? Is she visiting a sick relative? If so, she could have come at a different time and avoided being included in the group of six. Now she is charged with an outcome bias for refusing to imagine that things could have been otherwise, although her presence in the group was hardly a matter of chance.

One might also ask whether there are other healthy persons who could step up to the plate. The veil experiment finesses the "why me" question. Might not someone else enter the lottery of death? But who? Visitors to the ward? Anyone in the hospital? Are medical personnel exempt? And if so, is that because they contribute more to "the good for the greatest number"? This question opens the door to ranking individuals on metrics of expendability vs. deservingness. At its utopian limit, utilitarianism is omniscient. When every person’s value is known in terms of what they "contribute to the good," the veil of ignorance is torn, and the least valuable person is the first to be sacrificed. It’s an accountant’s ethic; it works better in thought experiments than in the real world.

Thought experiments showing that people can think along utilitarian lines have the luxury of choosing abstract and reductive circumstances. The results of such experiments have normative force only if such circumstances can be found in the real world. The veil of ignorance conceals more than it reveals. It asks people to ignore information about the world that they do have, and that information may be important.

Scope neglect

As a utilitarian, Bazerman supports effective altruism. If you have money to give, spend it where it will do the most good. This is hardly controversial. One of its implications, however, is not straightforward. Bazerman reports on a study that found scope neglect, or an unwillingness to donate more money to do more good. Respondents were asked how much they would donate to save 2,000, 20,000, or 200,000 migrating birds endangered by an oil spill. There was hardly any variation across these groups; the mean (hypothetical) donation was about $80. How bad is this result? Note that each respondent saw only one number of endangered birds. Perhaps respondents felt that they could afford to spend $80 on an ecological cause, and if more birds could be saved with it, so much the better.
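To see why the flat response reads as neglect, consider the value per bird that an $80 donation implies at the two extremes:

$$
\frac{\$80}{2{,}000} = \$0.04 \text{ per bird}, \qquad \frac{\$80}{200{,}000} = \$0.0004 \text{ per bird}
$$

A donor who valued each bird at a constant rate should be willing to give more, not the same, when a hundred times as many birds can be saved.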

One might turn to a repeated-measures research design and present each respondent with all three numbers of savable birds. Being able to see the different degrees of benefit, respondents might spend more money to save more birds (indeed, Bazerman himself argues that rational decision making involves making comparisons). Even if the respondents had a fixed budget for ecological spending, they could apportion it so that the larger share goes where it has the greater impact. Another method would – as long as pledges are hypothetical – provide respondents with an endowment of money and ask them to distribute it over the three groups of birds. Respondents might also receive three endowments, say $50, $80, and $150, and be asked to commit these amounts to the three numbers of endangered birds. You can probably hear methodologists cry foul, claiming that demand characteristics would overwhelm true preferences.

Demand characteristics typically involve the presence of information that should not be there, that is, information that tells respondents what to do while undercutting their true preferences. The inverse problem is harder to spot – and it does not even have a name. When respondents are given too little information, their task is underdetermined or even indeterminate. Giving respondents only one of the three numbers of birds may have created just this kind of problem, which respondents solved by reaching into their hypothetical wallets to express how much they care about contributing to ecological relief.

The identifiable victim

If given a budget, people donate more to help a specific individual than to help people in the aggregate. This is the identifiable victim effect (Small et al., 2007). To eliminate the difference, one would hope that people could be brought to give more to the statistical victim. Small et al., however, found that when respondents were informed about the identifiable victim effect, they ended up giving less to the individual, a result the authors call "perverse."

Why did the identifiable victim effect occur in the Small et al. study? The narratives presented to the respondents provide a clue. Whereas the identified victim is said to receive food, education, and medical care, the statistical victim will only receive food (see the appendix on p. 152). Relatedly, one might ask whether it is better to distribute very small amounts of assistance over a large number of victims or to bundle the assistance and give it all to one. The difference might be between no life saved and one life saved. "If you want to help, focus your efforts" is a rule of thumb that can be rational under some circumstances.
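A toy illustration with hypothetical numbers: suppose it takes $100 of aid to save one victim, and a donor has $100 to give.

$$
\underbrace{100 \times \$1}_{\text{spread thinly: no one is saved}} \qquad \text{vs.} \qquad \underbrace{1 \times \$100}_{\text{bundled: one life saved}}
$$

The totals are identical; only the bundling differs.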

The real problem with the identifiable victim is that many donors end up over-volunteering (Krueger, 2019). When Baby Jessica fell into a well, she received a total of $700,000 in aid. The problem of over-helping is avoided when donors can assume that their donation will go to one victim, while other donors’ money will go to other victims, an expectation that is raised in the Small et al. narrative.

References

Bazerman, M. H. (2020). Better, not perfect: A realist’s guide to maximum sustainable goodness. HarperCollins.

Driver, J. (2014). The history of utilitarianism. In E. N. Zalta (Ed.), The Stanford Encyclopedia of Philosophy. https://plato.stanford.edu/entries/utilitarianism-history/

Krueger, J. I. (2019). The vexing volunteer’s dilemma. Current Directions in Psychological Science, 28, 53-58.

Krueger, J. I., Heck, P. R., Evans, A. M., & DiDonato, T. E. (2020). Social game theory: Preferences, perceptions, and choices. European Review of Social Psychology, 31, 322-353.

Small, D. A., Loewenstein, G., & Slovic, P. (2007). Sympathy and callousness: The impact of deliberative thought on donations to identifiable and statistical victims. Organizational Behavior and Human Decision Processes, 102, 143-153.
