The Fear Factor


  But as we plotted further and further out on the social distance axis, the two groups began to diverge. By their fifth-closest relationship, controls were willing to sacrifice only $65. But altruists hadn’t budged. They responded just as they had for their closest loved ones. By position 20, controls’ willingness to sacrifice had dropped by about half, to $45, following closely the arc predicted by Jones and Rachlin. But altruists’ discounting slope remained so shallow that they were choosing to sacrifice as much for their twentieth-closest relationship as controls were for their fifth-closest. And on and on it went, until the most distant relationship (one-hundredth-closest), by which point controls would sacrifice only about $23—roughly one-quarter as much as they would for a loved one. By contrast, altruists elected to sacrifice more than twice as much as controls—$46—to share $75 with a near-stranger. Their generosity had dropped by less than half.
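The controls' pattern follows the hyperbolic social discounting function Jones and Rachlin proposed, v = V / (1 + kN), where V is the amount sacrificed for the closest other, N is social distance, and k sets how steeply generosity falls off. A minimal sketch of the shape of the two curves (the starting amount and the k values here are illustrative assumptions, not parameters fitted to the study's data):

```python
def discounted_sacrifice(v0: float, k: float, n: int) -> float:
    """Hyperbolic social discounting (Jones & Rachlin):
    amount forgone for the n-th closest person, given an
    amount v0 at zero social distance and discount rate k."""
    return v0 / (1 + k * n)

# Illustrative parameters only: a steep, "control-like" curve
# versus a shallow, "altruist-like" curve.
for label, k in [("steep", 0.03), ("shallow", 0.004)]:
    amounts = [round(discounted_sacrifice(85, k, n)) for n in (1, 5, 20, 100)]
    print(label, amounts)
```

With a steep k, the sacrificed amount falls to a fraction of its starting value by position 100; with a shallow k, it barely declines, which is the qualitative pattern the altruists showed.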

  These results suggest a simple reason why altruistic kidney donors find the decision to share such a precious resource—their own internal organ—with a stranger so intuitive: they don’t discount the welfare of strangers and near-strangers as much as the rest of us do. To them, it is nearly as worthwhile to make a sacrifice for someone whose name they don’t know or whom they have never even met as it would be for most of us to sacrifice for our closest friends and family. In the words of one donor we have worked with, “I see the world as one whole. If I do something for someone I loved, or for a friend… why would I not do it for someone I did not know?” This tendency really does seem to be alloparenting on overdrive, particularly because this generosity emerges even in the absence of vulnerability or distress cues—subjects in the social discounting study never saw or heard another actual person, but only imagined them.

  These findings also reinforce the critical distinction between “can” and “does.” Altruism is not simply a matter of having the ability to experience compassion and provide care. Nearly everyone can be compassionate and caring—at least for some people. The real question is, what do you do with that capacity when the person in need of your compassion and generosity is a stranger?

  This, of course, leads to another question: can the rest of us flatten out our discounting curves more? Can we become more like extraordinary altruists?

  On one level, the answer is almost certainly yes. All the social changes that are already occurring prove it. If people are becoming less violent and more altruistic toward strangers all over the world, then we must all be coming to care more about strangers’ welfare than we used to. It would be impossible for any kind of widespread genetically rooted change in the capacity for compassion to have occurred during this period of time, so these changes must reflect cultural shifts instead. Somehow these shifts are causing us to place ever more value on the welfare of strangers and flatten our discounting curves—or, as the philosopher Peter Singer and others have put it, to expand our “circles of compassion.”

  I think of discounting as a mountain on which the self stands at the pinnacle. The slopes of the mountain represent social discounting. If the mountain’s slopes are steep, like the Matterhorn, the person at the pinnacle values their own welfare high above that of others, and the welfare of their close friends and family high above the welfare of anyone more distant. Very distant others’ needs and interests are down in the foothills and can barely be seen through the haze. What factors might help to compress this mountain a little—flattening its slopes to more closely resemble the gentle silhouette of Mount Fuji—such that the welfare of more distant others is not so steeply discounted?

  3. More self-control is not the answer.

  Steven Pinker has suggested several possible reasons for ongoing declines in cruelty and violence over time. Some of them might influence social discounting, but others probably don’t. For example, one factor that may have contributed to declining violence—but not because it makes us fundamentally care more about strangers—is the rise of centralized governments. Centralized governments oversee the resolution of conflicts and the distribution of resources among individuals—and more importantly, among clans and tribes and nations. When a relatively impartial state mediates disputes, it interrupts the cycles of vengeance and retaliation that erupt when disputes must be resolved by the individuals involved in them. Later, during the Middle Ages, the severe punishments that the state meted out to criminals further reduced the appeal of criminal violence, while the rise of state-regulated trade and commerce increased the appeal of cooperation. According to Pinker, these changes reduced violence for two reasons. First, they shifted the incentives surrounding both cruel and cooperative behavior, rendering violent solutions to provocation or frustration both less necessary and less likely to yield desirable outcomes. Second, these changes may also have changed the social norms surrounding violence. As more wealth and status began to accrue to people who inhibited their aggressive impulses, the ability to exert self-control came to be viewed as more desirable.

  Although changes in people’s ability or tendency to exert self-control may partly explain declining violence, they almost certainly cannot explain increasing care and altruism toward distant others, because altruism in response to others’ distress or need is fundamentally emotional, not rational. As is true for the most common form of aggression—the hot, reactive, and frustrated kind—altruistic urges emerge from deep, primitive emotional structures in the brain. This is clearly true of compassion-based altruism, but it is also true of much learned altruism, and probably of kin-based altruism as well. (Reciprocal altruism is the closest to being genuinely rational, although it too is supported by activity in a subcortical agglomeration, in this case the striatum, that drives reward-seeking.) Ancient subcortical brain structures respond quickly and intuitively to altruism-relevant social cues, like vulnerability and distress in the case of compassion-based altruism. This is probably why altruistic kidney donors overwhelmingly report that their decision to act bubbled up quickly, and in many cases unexpectedly, in response to learning about someone suffering or in need. As one altruist told us, when he first spotted a billboard about someone seeking a kidney, “It was just like I was compelled to do it. The only thing I can figure is God reached down, poked me in the side, and said, ‘Hey, go help your fellow man.’… It was just overwhelming, I just wanted to do it. I have no clue why.” Another decided to donate after wandering by a booth at a health fair and learning about the dire need for kidneys. She recalled thinking simply, I’m pretty healthy. I have two kidneys. You got anybody that needs one? Both Lenny Skutnik and Cory Booker recounted that their decision to act was a fast and impulsive response to another person’s distress.
My own heroic rescuer’s decision must have been nearly instantaneous as well—he would only have had a second or so to decide whether to stop and help me. When altruism arises this way, from primitive, emotional processes, the only effect that self-control could possibly have is to suppress it, much as it suppresses aggression.

  My colleague David Rand, a behavioral scientist at Yale University, has collected systematic data supporting the idea that generosity toward strangers results from fast and intuitive processes and that rational deliberation suppresses it. He and his students have amassed a wealth of data from experimental simulations, including the Prisoner’s Dilemma, showing that people who respond the most generously usually do so quickly and without a lot of thought. The more time people take to stop and reflect, the less generous or altruistic they will ultimately be.

  Rand and his colleague Ziv Epstein have also examined the cases of dozens of real-life altruists who were awarded the Carnegie Hero Medal for confronting extraordinary risks to save another person’s life. (Lenny Skutnik is one of them.) They wanted to know whether, when facing real risks, people still leap into action first and only later stop to consider the danger, or whether they exert self-control to overcome their fear for their own safety. To answer this question, they combed through news archives to find interviews with people who had received Carnegie Hero Medal awards between 1998 and 2012 and extracted fifty-one heroes’ explanations of why they had acted. Among these explanations were statements like the following:

  I didn’t feel any pain at the time, I think adrenaline kicked in as my only thought was that I had to get to the women, I never felt any pain. The adrenaline just kicked in. I was trying to get there as fast as I could.

  And:

  The minute we realized there was a car on the tracks, and we heard the train whistle, there was really no time to think, to process it.… I just reacted.

  The researchers then had raters evaluate how fast and intuitive versus deliberative and rational each decision was. They also asked them to estimate, based on the details of each situation, how many seconds each rescuer had to act before it would have been too late. Finally, they ran all of the heroes’ descriptions through a software program that coded for certain kinds of language, like words and phrases associated with the exertion of self-control.

  I’m sure you can guess the results from the descriptions I’ve given you. Nearly half of the heroes described themselves as having acted without thinking at all, and their descriptions received the highest possible “fast and intuitive” score. Altogether, 90 percent of these altruists received ratings on the “fast and intuitive” end of the scale rather than the “deliberative and rational” end. This was true even for those rescues that had allowed at least a little wiggle room in terms of time—perhaps a minute or two to contemplate whether to act or not. The researchers ultimately found no relationship between how much time was available and how intuitively the altruists responded, suggesting that intuitive responding was not the inevitable outcome of a fast-moving emergency. The computer algorithm confirmed these findings, showing that heroes’ descriptions of their decisions incorporated little language suggesting that they had attempted to exert self-control. Together, these findings reinforce the idea that, rather than being deliberate attempts to be noble, urges to care and cooperate are deeply rooted in parts of the mammalian brain that may drive us to act on others’ behalf before we fully understand what we are doing or why.

  This fact has given me some pause about a growing movement called effective altruism, which is aimed at encouraging people to restrain their initial altruistic impulses in order to accomplish the greatest objective good. The movement was inspired by the work of the philosopher Peter Singer, and its advocates’ explicit aim is to convince people to donate to charity only after conducting comprehensive research into the objective impact that their donation will yield. The problem, in Singer’s view, is that we are prone to give to causes that happen to tug at our heartstrings—the GoFundMe campaign we saw on Facebook, our local animal shelter, a charity that collects toys for homeless children in our community—rather than rationally planning out altruistic giving to yield the greatest objective good. Instead of helping the GoFundMe family, the pets, and the homeless children, why not use that same money to buy bed nets for dozens or hundreds of families in Africa, reducing their risk of contracting malaria? Wouldn’t this result in objectively better outcomes, and wouldn’t that be preferable? (Remember how much more good it does to improve the lives of those who start out the worst off?)

  I couldn’t possibly disagree with the idea of using charitable donations effectively. But I see two problems with the philosophy. First, I doubt that there is usually a way to determine what constitutes the greatest objective good. Many would agree that saving five children from malaria is more valuable than buying supplies for an animal shelter (although others would not), but is it more valuable than donating to a university to support malaria vaccine research? How about supporting research on diabetes, which affects more people at any given time than malaria? Or spending the time that could otherwise have gone to fund-raising on preparing for and recovering from a kidney donation that saves a single person’s life? What if that person was a malaria vaccine researcher? Answering these questions requires making so many guesses and assumptions and subjective value judgments that any attempt to arrive at an answer using sheer rationality would quickly spiral into a vortex of indecision.

  “A vortex of indecision” is, by the way, where people actually end up when, following a brain injury, they are forced to use only logic and deliberation to make decisions about the future. Such injuries leave IQ and reasoning ability intact, but they prevent the affected from incorporating emotional information from deep within the brain into their decision-making. It turns out that intellect and reasoning alone are not sufficient for making complex subjective decisions. People who cannot generate an intuitive feeling of caring about one outcome more than another struggle for hours to make decisions about even basic things like which day of the week to schedule a doctor’s appointment—the kind of decision, like so many others, for which there is no purely rational answer.

  As he accepted his Nobel Prize for literature in 1950, the philosopher Bertrand Russell declared, “There is a wholly fallacious theory advanced by some earnest moralists to the effect that it is possible to resist desire in the interests of duty and moral principle. I say this is fallacious, not because no man ever acts from a sense of duty, but because duty has no hold on him unless he desires to be dutiful.” Ultimately, the gut-level, irrational feeling of just caring more about certain causes than others is what moves people to help. Desire, not reason, drives action. This is why even the most sophisticated computers don’t yet act of their own accord, despite having perfect reasoning abilities—they have no feelings or desires. Psychopaths are often highly rational, but this does not drive them to provide costly help to others, believe me, because they lack the emotional urge to do so. And those who do go to great lengths to help others overwhelmingly describe their motivations in terms of impulse and feelings. Consider the case of Robert Mather, the effective altruist and founder of the Against Malaria Foundation, which has been called the most “effective charity” in the world. In Mather’s own telling, he was first moved to devote his life to charity work, not by clear-eyed rationality, but because he stumbled upon the story of a single little girl who had been horribly burned in a fire, and whose story moved him to tears.

  Even when people do describe their decisions in terms of clear-eyed rationality, their brains may tell a different story. One altruist who participated in our research described his decision to donate in beautifully utilitarian terms. Upon first reading a news article about altruistic donations, he said,

  it clicked with me immediately, and then I thought: this is something that I could do, and something I was comfortable with. So at that point, I did some research on the web about side effects and mortality rates and the possibility of comorbidities afterwards, and I was very comfortable that the risks were low and acceptable to myself, and that the benefits to the patient—especially if they were already on dialysis—the improvements in lifestyle and improvement in life span and their ability to get back in their life, the benefits were great.

  He conducted, in other words, a simple cost-benefit analysis. When I asked about any other thoughts or feelings he might have been having that contributed to his decision to donate, he replied, “I guess I would say I am super-rational—I do not get emotional about the decisions I make.” On one of the standard empathy scales we used in the study, he reported his own levels of empathy to be very low. I had no reason to doubt anything he said about himself. It was clear from his professional accomplishments in the technology sector and the way he described other decisions he had made that he was capable of sophisticated rational analysis. But our data revealed that rationality was not all that he was capable of.

  When we first examined the scatterplots describing our brain imaging and behavioral data, one of the altruists had stood out from the rest. Of the nineteen we had tested, one scored nearly perfectly in terms of his ability to recognize fearful facial expressions—the top scorer of all our participants. This person also showed a robust amygdala response to fearful expressions during the brain scan—easily in the top half of the altruists. Who was this super fear-responder? None other than our self-described super-rational, low-empathy altruist. I believe this altruist considered himself to be low in empathy. And he may well have been relatively low in the sort of cognitive empathy that is linked to Theory of Mind and autism. But he also had remarkably high levels of the kind of empathy that is important for caring and altruism: sensitivity to others’ displays of vulnerability and distress.

  Of course, this one data point cannot prove that this altruist’s heightened sensitivity to others’ fear was the cause of his extraordinary actions. But it does prove that you should never take people’s self-reported empathy at face value. Just as Daniel Batson’s subjects were led to believe that a placebo called Millentana had led them to behave altruistically, so can all of our brains easily mislead us about the causes of our own behavior and feelings and decisions.

  Everything we know from the laboratory suggests that deliberation and rationality are not what ultimately drive people to care. Indeed, the more deliberatively and rationally people think about generosity, the more likely they may be to suppress their initial urges to help, and the less generous they ultimately become. Viewing people’s natural desires to help particular causes as a springboard to action, rather than as something to be suppressed or overridden, seems to me a more genuinely effective approach than insisting on pure altruistic rationality.