Behaviorally Realistic Solutions to Environmental Problems

Why don’t people: set thermostats to use less energy when they are out of the house; cancel mail catalogs they never open; check their tire pressure regularly; ask whether their nursery stocks non-invasive, native plants; buy well-made clothes that are comfortable to wear for a long time; clean out their basement on hazardous material pick-up days? It is puzzling when people don’t do seemingly simple things that are good for them and the environment – especially when we are those people.

It is infuriating to see such behavior in organizations viewed as having an obligation to know, and to do, better. Why don’t they maintain their equipment properly, stop idling their vehicles pointlessly, consult neighbors when they are going to make noise (or dust or traffic), restore sites when they are done with them, buy everything they can locally, and tell users what the risks are with smart meters (or genetically modified crops or plastic bottles)?

Sustainability depends on people making sound decisions. They need to do things differently, even when that requires new equipment, special training or studying options. They need to bring other people along, even when that means taking the lead, persuading their friends or acquiring expertise. They need to make things happen, even when that means reading (and rereading) instructions, asking for help or rejecting flawed solutions.

When people don’t do their part in making sustainability work, it can be tempting to give up on them, concluding that they are inept, ignorant, immoral, inscrutable or just impossible. Such pessimism seems consistent with the message in the flood of popular books about the psychology of decision-making. Among professional science writers, Malcolm Gladwell and Atul Gawande understood early on how that research could illuminate the ways intuitive thinking can undermine important decisions – resulting in eminently readable essays like those collected in What the Dog Saw and Complications. Books by researchers with a gift for storytelling include Dan Ariely’s Predictably Irrational, with its self-explanatory title, Max Bazerman and Ann Tenbrunsel’s Blind Spots, on all-too-common moral failings, and Richard Thaler and Cass Sunstein’s Nudge, about the inertia supporting many poor choices.

At first reading, these accounts make decision-making research seem like the science of human frailty, depicting people as overconfident, inconsistent, gullible, self-deluding slaves to social pressure, driven by transient emotions and unwilling to admit mistakes. If people are really that flawed, then it is hard to imagine them making, then implementing, the decisions needed for sustainable change.

However, a closer reading reveals a more complex and hopeful picture. It is wonderfully articulated in Thinking, Fast and Slow by Daniel Kahneman, who founded the field along with his late colleague Amos Tversky, both of whom I was fortunate to have as graduate advisors.

© iStockphoto.com/frender

In their account, perfect choices are impossible in a complex, uncertain world, which poses so many different, difficult decisions that people cannot possibly master them all. Their research showed how people respond to this burden of choice by relying on heuristics – “rules of thumb” that often work well but can let people down, resulting in biases.

For example, “if it ain’t broke, don’t fix it” is a heuristic that can avoid needless expenses but also produce unpleasant surprises when maintenance is deferred too long. It is a rule that can avoid gambling on unproven solutions when current ones work just fine. But, it can also leave people lagging behind their innovative competitors or adversaries. “When you hear hoofbeats, think horses, not zebras” is a heuristic that focuses on the most common risks but can also leave one vulnerable to new risks or old ones that no one liked discussing.

Both these “rules of thumb” are special cases of the availability heuristic, whereby one judges the likelihood of an event by how easy it is to remember or imagine it happening. That heuristic often works well because people are good at estimating how frequently they have seen things happen – so much so that keeping counts appears to be something one does automatically. Relying on availability can misfire, though, when a person fails to realize that appearances are deceiving, with some events being disproportionately visible (e.g., homicides) and some being disproportionately hidden (e.g., suicides). When that happens, major problems can be out of sight and out of mind while minor ones dominate a person’s attention.
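
As a rough, purely illustrative aside (not part of the research described here), a few lines of Python can show how availability misfires when visibility is lopsided. The event shares, “visibility” weights and names below are invented for this sketch only.

    # Hypothetical sketch: availability-based frequency judgments when some
    # events are far more visible than others. All numbers are invented.
    import random
    random.seed(1)

    true_share = {"homicide": 0.4, "suicide": 0.6}   # assumed true frequencies
    visibility = {"homicide": 0.9, "suicide": 0.2}   # chance a case is seen and remembered

    # Simulate 10,000 events; a person only "stores" the ones they happen to notice.
    remembered = []
    for _ in range(10_000):
        event = random.choices(list(true_share), weights=list(true_share.values()))[0]
        if random.random() < visibility[event]:
            remembered.append(event)

    # Availability-based estimate: judge frequency by what comes to mind.
    for event in true_share:
        estimate = remembered.count(event) / len(remembered)
        print(f"{event}: true share {true_share[event]:.0%}, availability estimate {estimate:.0%}")

With these made-up numbers, the more visible event ends up looking far more common than it actually is – the out-of-sight, out-of-mind problem in miniature.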

In these respects, experts are fallible people, too, when they lack proven solutions and must rely on their judgment. It may be professional judgment, but it is judgment nonetheless. Experts, too, can miss things hidden or changing. Experts, too, can fail to realize when they have been skating on increasingly thin ice. National security decisions pose the kinds of threats that can foil expert judgment; those failures are painfully revealed when adversaries discover neglected weaknesses. The same is true of natural security decisions, whose failures are revealed when environmental collapses show how dependent humanity is on services provided by aquifers, wetlands, shorelines, arable soil and clean air – not to mention the vistas, quiet and habitat necessary for hiking, hunting, well-being and a legacy for children.

It would be nice to imagine a radical makeover, which would transform humans into superb decision-makers. However, little evidence exists for simple steps that will make much difference. People do not become better decision-makers by thinking harder, memorizing lists of biases or reading about others’ misfortune. Indeed, relying on simple measures to perform miracles could make matters worse if it left a person feeling more confident as a decision-maker without being more proficient. Dissecting others’ errors can be particularly dangerous if hindsight bias obscures how difficult decisions really are to the people facing them.

Decision-making processes are deeply ingrained in how humanity has learned to perceive its world. As a result, people need to recognize they are stuck with fallible decision-making processes. What one can realistically do is to coax as much mileage as possible from imperfect heuristics, recognize their limits well enough to hedge one’s bets and get help when needed.

Richard H. Thaler and Cass R. Sunstein describe one form of such help: Let experts design the architecture of people’s choices so as to “nudge” them toward the decisions the experts think are right. One key aspect of that choice architecture is the default option – the one that takes effect automatically unless the person decides otherwise. As an example of the power of defaults, Eric Johnson and Dan Goldstein showed how much higher organ donation rates were in countries where the default was giving, rather than not giving – even between seemingly similar countries, such as Denmark and Sweden.

Defaults can lead to better decisions if they communicate important social norms about what one should choose. Defaults can lead to worse decisions if they steer people who are so confused or rushed that they unthinkingly accept poorly chosen options.

The organ-donation example shows the elements of a scientific approach to decision-making. Researchers begin by taking an “inside view,” analyzing the decisions facing people. In Pennsylvania, the organ donation decision is typically made while renewing a driver’s license, with no explanation of what it means to check the relevant box and a line of people waiting for one to finish. Unlike harried drivers, researchers can assemble the facts needed to understand the choice (e.g., how badly organs are needed, whether donors are treated differently in the emergency room, what different faiths advise).

After analyzing the choice, researchers predict how people will actually make it, drawing on the behavioral principles their science has identified. In this case, several of those principles point to the same prediction: When uncertain about what to do, people naturally look for cues regarding what other people do (hence, they may look to the default for guidance). People prefer erring on the side of inaction rather than action (hence, they may stick with a default option). People hate to give things up (hence, they may find themselves valuing the “right” to keep their organs posthumously). People dislike ambiguous choices (hence, they may stick with the default rather than thinking too hard).

Box 1 shows more behavioral principles. Sources at the end of this article provide even more. The fact that so many principles exist means that scientists cannot easily predict how people will make specific decisions unless all the principles point in the same direction – as seems to happen in Pennsylvania Department of Motor Vehicles (DMV) offices. When well-established principles point in different directions (or in no clear direction at all), research is needed to see what people actually do.

The need for that research is often underestimated. People are all amateur psychologists, trying to predict one another’s behavior. One guiding heuristic is assuming that one sees things the way everyone else does, unless told otherwise – as when women are told how men think differently and vice versa. Although often useful, that heuristic leads to exaggerating how well one can read others’ minds. Psychologists are additionally vulnerable to a variant of availability bias when they make predictions: They study specific phenomena (e.g., biases, emotions, social pressure) so much that they begin to see them everywhere and to exaggerate their importance.

When individuals know what they want from a decision, they should not be affected by its choice architecture. For example, people whose religion forbids organ donation should never check the donor box on their driver’s license, if they understand that choice. Whether people have, in fact, made informed choices is another question for researchers to answer. They may, for example, observe drivers as they make their choices (seeing what they read), interview drivers after they leave the DMV (seeing what they remember) or evaluate the readability of license forms (seeing what information people with different literacy levels could, conceivably, extract from them). If communications fail those tests, then drivers have been manipulated into choices against their will – and perhaps against their best interests.

Even if people end up making choices that are right for them, they may bristle at being manipulated – through choice architecture or anything else. Rather than being directed by a default, people may want to have all options presented equally, under conditions that help them make informed, independent choices. They may ask questions like, “Who gave anyone the right to steer me in any direction? What precautions are there for people who are steered in the wrong direction? Why isn’t everyone given easy access to the facts that they need, letting them decide which to master?”

People ask such questions because they often care about how decisions are made as well as how they turn out. If they care enough, they can attack an entire program even if it brings them good results. For example, “smart” electric meters sporadically encounter intense consumer opposition despite their potential ability to reduce electricity consumption. When people oppose a seemingly appealing technology, they may be badly informed. Or, they may distrust the people behind the technology, and their claims about its risks and benefits. One way to lose trust is by manipulating people without their permission.

Every decision and decision-maker is special in some ways. However, decision science approaches them all with the same basic steps:

  1. Analysis: Estimate the risks and benefits from making different choices.
  2. Description: Study what people currently believe about those risks and benefits.
  3. Solution: Try to improve decision-makers’ understanding, testing how well solutions work and repeating as necessary.

Aiding the Decisions that Began this Article Might Yield These Results:

Thermostats

Analysis: People could stay comfortable and save money by setting their thermostat to kick in a half-hour before they come home.
Description: People cannot figure out how to adjust their thermostats because displays are unintuitive and instruction booklets are lost or useless.
Solution: People may do better with well-executed online instructional videos and well-trained call center personnel.

Mail-Order Catalogs

Analysis: People could find most of the products they need online.
Description: People do not realize the environmental burden created by catalogs (even if recycled) or how much they add to the cost of goods.
Solution: People may do better with a single phone number (or URL) to stop all catalogs, along with links to order the same products online.


© iStockphoto.com/VCTStyle


Tire Pressure

Analysis: People could save money and be safer with fully inflated tires.
Description: People mistakenly expect underinflated tires to bulge. They cannot get pressure gauges to work. They resent paying for air at gas stations.
Solution: People may do better with brief instructions about the unintuitive properties of tires and with better-designed gauges. Providing free air requires addressing the revenue needs of gas stations.

Garden Plants

Analysis: People would be better off with native plants, which are hardier (because they are adapted to local conditions), less expensive (because they need to be replaced less frequently) and better at attracting birds, bees and butterflies.
Description: People know little about the botany of their gardens or the threats posed by invasive species.
Solution: People may do better with some horticultural education and with plant suppliers who market to consumers by cultivating an interest in native plants.

The Same Questions Could be Asked About Company Behavior:

Equipment Maintenance

Analysis: Companies could save money through better maintenance that reduces accidents and large repairs.
Description: Companies may not realize their vulnerability if their employees lack the training to spot problems and the incentives to address them.
Solution: Companies may do better with insurance policies that reward near-term expenditures that reduce long-term costs.

And so on. In each case, decision science provides a systematic approach to understanding and improving the decision. At each stage, it takes advantage of whatever evidence and expertise are available. The analysis stage integrates that knowledge to predict the risks and benefits of different choices. The description stage uses results from the social or behavioral sciences to predict how people will, in fact, choose. The solution stage adds knowledge of institutional mechanisms (markets, insurance, regulations) to design ways to improve choices.
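
As a purely hypothetical sketch of that division of labor (the class and field names below are invented for illustration, not drawn from any published decision-science toolkit), the three stages can be pictured as a simple Python data structure, using the thermostat example from earlier in this article.

    # Hypothetical sketch of the analysis / description / solution stages.
    # Names and entries are illustrative only, not an established API.
    from dataclasses import dataclass, field

    @dataclass
    class DecisionAid:
        decision: str
        analysis: dict = field(default_factory=dict)     # estimated facts about each issue
        description: dict = field(default_factory=dict)  # what people currently believe
        solutions: list = field(default_factory=list)    # candidate fixes, to be tested

        def belief_gaps(self):
            """Issues where beliefs differ from the analysis -- the gaps a
            solution (instructions, defaults, redesign) should try to close."""
            return [issue for issue, fact in self.analysis.items()
                    if self.description.get(issue) != fact]

    # Thermostat example, with made-up entries.
    aid = DecisionAid(
        decision="program a thermostat setback",
        analysis={"setback saves money": True, "comfort is lost": False},
        description={"setback saves money": True, "comfort is lost": True},
    )
    aid.solutions.append("clearer display plus an online video, to be tested")
    print(aid.belief_gaps())   # -> ['comfort is lost']

The point is only the structure: the analysis supplies the best available facts, the description reveals where beliefs diverge from them, and candidate solutions target those gaps and are then tested.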

Like any applied science, decision aiding begins with hypotheses and then proceeds to test them out, analyzing risks and benefits, studying intuitive decision-making, and designing and evaluating possible solutions. Organizations concerned with sustainability must find and coordinate the people needed to do such work. Popular accounts will help them understand the issues. However, they owe it to themselves and to the problems to get professional help, so that their complex, applied problems get the same quality of attention devoted to the simplified problems that scientists study. Decision science is the engineering science of decision-making. It combines behavioral research and decision analysis, informed by expert knowledge of risks and benefits and aided by designers and practitioners, to create and implement the best solutions to sustainability problems.
