
How Does the Brain Secrete Morality?

“The brain secretes thought as the liver secretes bile,” asserted 18th-century French physiologist Pierre Cabanis. Last week, the Potomac Institute for Policy Studies convened a conference of neuroscientists and philosophers to ponder how our brains secrete thoughts about ethics and morality. The first presenter was neuroeconomist Gregory Berns of Emory University, whose work peers into brains to see in which creases of gray matter the values we hold sacred lodge. The study, “The Price of Your Soul: neural evidence for the non-utilitarian representation of sacred values,” was just published in the Philosophical Transactions of the Royal Society B.

Philosophers often frame arguments over the bases of ethics in terms of deontology (right v. wrong irrespective of outcomes) and utilitarianism (costs v. benefits of potential outcomes). Both utilitarians and deontologists would argue that it is wrong to kill innocent human beings. A utilitarian might tote up the costs of being caught in murder or the harms to a victim’s family, whereas a deontologist would assert that it is a moral duty to avoid killing the innocent. For most people, a utilitarian reckoning in this case seems cold and psychologically broken (e.g., the kind of calculation that a psychopath would make). The researchers define personal sacred values as those for which individuals resist trade-offs with other values, particularly economic or materialistic incentives.

It is this distinction that Berns probes using functional magnetic resonance imaging (fMRI) to see in which parts of subjects’ brains their moral decision-making is localized. Such scans identify activated areas of the brain by measuring blood flow.

Without going into all the details, subjects in the study were asked to choose between various value statements, some hypothesized to be more deontological and others more utilitarian, e.g., you do/do not believe in God, and you do/do not prefer Coke to Pepsi. Once a baseline was established for each subject, they were given an opportunity to auction off their personal values for real money, up to $100 per value sold. Once the auction was over, each subject was asked to sign a document contradicting his or her personal values. Those values that subjects refused to auction off were deemed “sacred.”
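To make the sorting concrete, here is a minimal sketch of the classification logic as described above; the function name and the example data are hypothetical illustrations, not the researchers’ actual materials or code:

```python
# Minimal sketch of the study's sacred-vs-tradable classification
# (a reconstruction of the protocol described above, not the authors' code).

MAX_OFFER = 100  # dollars; the study's stated ceiling per value sold

def classify_values(auction_results):
    """Split a subject's personal values into 'sacred' and 'tradable'.

    auction_results maps each value statement to the dollar amount the
    subject accepted to disavow it, or None if they refused every offer.
    """
    sacred, tradable = [], []
    for statement, price_accepted in auction_results.items():
        if price_accepted is None:
            sacred.append(statement)       # refused all offers up to $100
        else:
            tradable.append((statement, price_accepted))
    return sacred, tradable

# Hypothetical outcomes for one subject:
results = {
    "You believe that God exists": None,   # refused to sell: sacred
    "You prefer Coke to Pepsi": 12,        # sold for $12: tradable
}
sacred, tradable = classify_values(results)
print(sacred)    # ['You believe that God exists']
print(tradable)  # [('You prefer Coke to Pepsi', 12)]
```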

Berns and his colleagues found that values identified as sacred were processed in areas of the brain associated with semantic rule retrieval. Basically, subjects were reading off moral rules, what another conference participant would later refer to as “moral platitudes.” In addition, when sacred values were contradicted by their opposites (e.g., when a believer was confronted with the statement “You do not believe that God exists”), the researchers found arousal in the amygdala, a region associated with negative emotions.

Not surprisingly, with regard to the personal values that subjects auctioned off, the areas of the brain known to be associated with evaluating costs and benefits were activated. The researchers also suggest that when policymakers try to employ positive or negative incentives to encourage trade-offs in the foreign policy or economic arenas, they may instead arouse sacred values, provoking a reactionary response in the people at whom the policies are targeted.

Berns also presented the results of another study [PDF] in which brain scans turned out to have identified a song that subsequently became a hit. In an earlier study, Berns and his colleagues had downloaded 15-second clips of various unknown songs from MySpace and played them for 27 adolescents while scanning their brains. The earlier study [PDF] focused on how knowing what others think about an item (in this case, a song fragment) activates brain areas associated with anxiety, motivating people to switch their choices in the direction of the consensus. In other words, people often succumb to peer pressure.

Some years later, Berns heard one of the songs on the TV show American Idol. Berns wondered whether something in the earlier scanning data could have predicted a “hit” song. Mining the old brain scans, Berns found that subsequent song sales were weakly but significantly correlated with the activation of “reward” centers in the brains of the scanned adolescents. He speculates that scanning the brains of small groups might someday be used to predict cultural popularity.
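For the curious, the kind of correlation Berns reports could be computed along these lines; this is a minimal sketch with invented numbers, assuming one mean reward-center activation value per song and that song’s later sales, not the study’s actual data or analysis pipeline:

```python
# Minimal sketch of correlating brain activation with later song sales
# (a reconstruction, not Berns' actual pipeline; all numbers are invented).
import numpy as np
from scipy import stats

# Hypothetical per-song data: mean reward-center activation averaged
# across the scanned adolescents, and that song's subsequent unit sales.
activation = np.array([0.12, 0.45, 0.08, 0.33, 0.27, 0.51, 0.19, 0.40])
sales      = np.array([900, 5200, 400, 3100, 1500, 8800, 1200, 2600])

# Sales distributions are heavy-tailed, so log-transform before correlating.
r, p = stats.pearsonr(activation, np.log(sales))
print(f"r = {r:.2f}, p = {p:.3f}")  # a "weak but significant" result would
                                    # mean a modest r with p below 0.05
```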

The next presenter was philosopher William Casebeer, who is now also a program officer at the Defense Advanced Research Projects Agency. In general, Casebeer argues that the moral psychology required by virtue theory is the most neurobiologically plausible. Basically, there is no is/ought chasm between facts and values, and evolutionary psychology properly understood teaches us that it’s Aristotelian virtue ethics all the way down. Ethics is largely a matter of cultivating the proper moral character.

Casebeer’s talk, suggestively subtitled “How I learned to love determinism, but still respect myself in the morning,” aimed to deal with the longstanding problem in neurophilosophy of how to square determinism in neuroscience with a moral philosophy that celebrates the freedom and responsibility of agents. Determinism undergirds science in general and neuroscience in particular; there are no uncaused causes. However, our social institutions are shot through with assumptions of free will and agency. Is it possible to reconcile these two views? Casebeer argues that we should stop talking about free will and instead adopt a language focused on the idea of critical control centers.

Casebeer thinks that holding agents responsible depends on the notion of being in or out of control. Being in control depends on what he calls the functional architecture of a well-ordered psyche. To suggest what elements might constitute an appropriate functional architecture of the psyche, Casebeer urged us to consider a schema of meaningful control distinctions [PDF] devised by philosopher and artificial intelligence theorist Aaron Sloman. A large working memory gives a putative agent more control than a small one; so too does an ability to learn versus a fixed repertoire, a theory of mind versus none, an ability to reason counterfactually versus none, a robust reward-prediction mechanism versus a weak one, and a multi-channel sensory suite versus a single channel. Along these dimensions, organisms (and perhaps one day artificial intelligences) can be ranked, from microbes to humankind, with regard to being more or less in control, as the sketch below illustrates.
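One way to picture Sloman’s schema is as a profile of control dimensions with a crude aggregate score; the sketch below is purely illustrative, with a made-up scoring scheme and invented example profiles rather than anything drawn from Sloman’s or Casebeer’s work:

```python
# Illustrative sketch of ranking agents along Sloman-style control
# dimensions, as described above. The dimensions come from the talk;
# the scoring scheme and example profiles are invented for illustration.
from dataclasses import dataclass

@dataclass
class ControlProfile:
    working_memory: int         # 0 (tiny) .. 3 (large)
    can_learn: bool             # ability to learn vs. fixed repertoire
    theory_of_mind: bool
    counterfactual_reasoning: bool
    reward_prediction: int      # 0 (none) .. 2 (robust)
    sensory_channels: int       # number of independent sensory channels

    def control_score(self) -> int:
        """Crude aggregate: more points = more 'in control'."""
        return (self.working_memory
                + 2 * self.can_learn
                + 2 * self.theory_of_mind
                + 2 * self.counterfactual_reasoning
                + self.reward_prediction
                + min(self.sensory_channels, 3))

microbe = ControlProfile(0, False, False, False, 0, 1)
human   = ControlProfile(3, True, True, True, 2, 5)
print(microbe.control_score(), human.control_score())  # 1 vs. 14
```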

Another mechanism of control is the environment in which an organism exists. In the case of humans, Casebeer argues, a lot of outside control exists in our culture, norms, and institutions. We tell each other moral narratives in which we explain how internal control factors relate to the external environment. We take our cues about what is right and wrong to do from watching and emulating others. Our brains transmute these moral narratives into our moral characters. In other words, these narratives tell us what sorts of things are sacred (no trade-offs) and what can be evaluated on the basis of costs and benefits.

In some environments, we recognize that any control system can become overwhelmed, and no one would be held responsible for what they do in such a circumstance. For example, if someone spikes your coffee with LSD and you end up harming someone because you hallucinate that they intend to kill you, you would not be blamed. Clearly, we already assign various levels of culpability based on our evaluation of an individual’s ability to control himself, e.g., children, the mentally ill, etc.

At the end, Casebeer suggested that the research agenda for the next 100 years would generate a neuroscience of critical control distinctions. He predicted that many critical controls will be social; narratives involving the punishment of moral infractions and the reward of moral conduct become etched in our brains and build our moral characters.

Next, University of California, San Diego neurophilosopher Patricia Churchland asked: Where do values come from? She pointed toward Charles Darwin’s notion of a moral sense that arises from a combination of our in-born social instincts, our habits, and our reason. Neuroscientists now know more about the neurotransmitters involved with our social instincts. At the hub of these instincts are the molecules oxytocin and vasopressin, which encourage attachment and trust. Mammalian attachment and trust are the platform from which moral values derive. Bigger brains help by giving humans a greater capacity to learn habits, to override and repress impulses, and to plan. Better memories help us keep track of who did what to whom and why, thus enabling us to track reputations and seek out cooperators. Culture is an essential part of the story, guiding and limiting our moral choices.

Churchland cited Eleanor Rosch’s work on concepts and categories [PDF], the famous example of which is the category of vegetable. If you are like 90 percent of Americans, you thought of a carrot first as the prototypical example of a vegetable. The point is that categories have fuzzy edges; radishes are clearly vegetables, but what about wild mushrooms? They are located in the same supermarket aisle, after all. These categories depend on pattern recognition; human beings reason analogically about how new cases relate to earlier categories.

Churchland’s argument is that moral categories too have fuzzy edges, whose boundaries differ from culture to culture. Take the example of the 9/11 atrocities. If you analogize them to the Japanese attack on Pearl Harbor, the proper response is war. On the other hand, if you analogize them to the Oklahoma City bombing, they are a matter for the police and the courts. We have moral prototypes of what it means to be a friend, brave, kind, and honest. However, cultures will differ over what counts as honest at the edges. Again, our particular social institutions structure our moral expectations, and very different behaviors can emerge depending on a culture’s set of institutions.

So on Churchland’s account, the project of using neuroscience to uncover some kind of universal human morality looks likely to fail. Philosopher John Shook cast further doubt on the idea that a new neuroethics could use scientific scrutiny to re-engineer ethics. It might be the case that no sophisticated ethical system can improve on a set of basic human moral norms. These moral norms consist of a common set of virtues that people teach their children, e.g., respect your abilities and try to improve them; don’t betray group efforts for personal gain; etc. These norms amount to ethical platitudes that are good enough for most people. Apparently, Shook thinks that most ethical norms are very like what Berns calls sacred values, i.e., social rules that are simply read off and acted on when a relevant case arises.

When I asked him about utilitarian thinking, Shook declared that it was devised relatively recently by philosophers (Hobbes, Locke, etc.) as a way to justify a certain kind of politics that attempted to resolve conflicts within society. There was no utilitarian moral thinking during the time when our ancestors roamed as hunter-gatherers. Morality then consisted of a set of rules that regulated social life in small bands, virtue ethics writ small.

Pondering the various presentations, I wondered: Can neuroethics tell us anything new and useful about ourselves? Berns’ research appears to vindicate the sense we all have that some things are just right or wrong and damn the consequences, whereas in other situations consequences matter and we must weigh the harms and benefits that actions impose on others. I interpret Shook as being something of a moral pessimist; neuroscience will most likely end up telling us that we really can’t do much to improve our moral thinking and systems.

The message from Casebeer and Churchland is that institutions matter, but interestingly they refrain from saying that some institutions (and the moral consequences that flow from them) are objectively better than others. Actually, Casebeer’s project to naturalize Aristotelian virtue ethics suggests a way to determine whether one set of social institutions is better than another: Do they enable and enhance human flourishing? In my view, human prehistory and history have been a more or less random search for social institutions that increasingly discover and conform to our evolved natures. My contention is that it is manifestly the case that liberal institutions, e.g., respect for persons, free markets, the rule of law, religious tolerance, and democracy, do contribute to human flourishing. I suspect that a scan of my brain would find that that conclusion amounts to a sacred value for me.

Ronald Bailey is Reason's science correspondent. His book Liberation Biology: The Scientific and Moral Case for the Biotech Revolution is now available from Prometheus Books.
