
Applying behavioral economics to innovation: Testing and assumptions

By Nathan Schneeberger, PhD, and Paige Seaborn | October 28, 2021

A two-part series exploring how innovators can use behavioral economics as a lens to examine their processes and projects.

In situations where emotions, ego and financial commitments run high, people tend to behave irrationally. This could mean an innovator becomes emotionally attached to a project, leading to less rational thinking. It can also create challenges when seeking stakeholder approval for an innovation initiative — especially one with a hefty price tag. The principles of behavioral economics provide a remedy by helping us better understand how people behave when faced with the risk-reward economic decisions that are prevalent throughout innovation projects. By enabling us to step back and address irrational behavior and thinking, an understanding of behavioral economic principles can be the difference between the success and failure of an innovation initiative.

A brief introduction to behavioral economics

Interestingly, behavioral economics is an example of disruptive innovation. When it gained recognition in the 1970s and 1980s, it challenged the status quo, upsetting 40 years of economic theory. Early behavioral economists borrowed techniques and research from other social science disciplines and applied them to classical economic problems.

Prior to behavioral economics, most of Western economic theory was based on the underlying assumption that human beings were rational decision makers who made logical choices in their own best interest and were themselves best suited to maximize their personal outcomes. Behavioral economics called this fundamental assumption into question. People, it turns out, are much more complex, as behavioral economists demonstrated through clever research.

Since the 1980s, there has been an abundance of research in both psychology and behavioral economics that shows people do not always behave as rational actors and are often influenced by a number of biases and heuristics depending on the situations they are in, how information is presented, and who is around them.

Work by Nobel laureate Daniel Kahneman and his longtime collaborator Amos Tversky — often referred to as the fathers of behavioral economics — showed that people responded differently when the same outcome was framed as a gain or a loss. Other research has shown that external forces (e.g., how information is structured) and internal forces (e.g., cognitive biases and decision-making heuristics) work to influence the decisions that people make. Even emotional states (happy, sad, depressed, or angry) can influence the way people encode memories, process information, and make decisions. The behavioral economist Shlomo Benartzi, best known for his research on retirement savings and the Save More Tomorrow nudge, argues that even the colors a person sees on a screen can impact memory and decisions. Rationality may be the exception and not the rule.

Behavioral economics and innovation

The Willis Towers Watson approach to innovation borrows concepts and ideas from social science research. It includes:

  1. Testing a hypothesis
  2. Validating or invalidating assumptions

We try to build a body of evidence to confirm our problem exists and that our solution prototype will meaningfully solve that problem in the eyes of our “customer” (i.e., whoever has a problem we are trying to solve).

Here's how behavioral economics comes into play at each step:

  1. Testing hypotheses

    A common technique we use to test hypotheses with customers is to conduct problem validation and solution interviews. However, as noted above, social science and behavioral economic research tells us that how you present information to a person will influence what they think, how they respond and what decisions they make as a result. We need to ask ourselves questions such as:

    • Is my interview gathering useful information or is my line of questioning a self-fulfilling prophecy?
    • Are the characteristics of my questions leading interviewees to the answers I want to hear?

    Writing good questions that are balanced in language and encourage conversation is key to limiting bias and gathering useful information.

    And of equal importance to the content of the questions is who is being asked. The representativeness of your sample will determine how well your results generalize across your theoretical population of stakeholders who experience the problem and potential users or buyers of the solution. It’s important to carefully identify whom you interview. It’s not that interviewees must be free of bias or perspective, but that you need to select for bias and perspective in a way that is representative of your entire pool of customers.

  2. Validating or invalidating assumptions

    Another idea that innovators can borrow from social science research is the concept of validity, which can be thought of as the strength of the inferences and conclusions that we draw from the data we collect.

    In innovation, we talk about solutions, problems, the solution-problem interaction, and assumptions that we make about all three. The validity of those assumptions is our estimate of how strong they are, and how likely they are to be true.

    For example, we tend to have preconceived notions that our solution is a huge breakthrough idea or a small incremental improvement to a process — these are our assumptions, which are often influenced by personal bias and attachment to the solution. Validation of the solution concept, however, must come from the evidence we collect to support those notions. We also need to collect evidence about our problem. Some assumptions about our problem that would be critical to test are:

    • Do others (e.g., clients, customers, individual consumers) experience this same problem or is the problem limited to a few?
    • Is the problem of sufficient magnitude to motivate them to change their behaviors or (more importantly from a business perspective) spend money to solve it?

    Validity is not, therefore, an end state but a process of accumulating evidence to support our assumptions and inferences.
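
One way to make the idea of accumulating evidence concrete is to track each assumption alongside the evidence gathered for and against it. The sketch below is a minimal, hypothetical illustration in Python; the class, field names and example evidence are invented for this article rather than taken from any formal Willis Towers Watson tooling.

```python
from dataclasses import dataclass, field

@dataclass
class Assumption:
    """Pairs one assumption with the evidence collected so far (illustrative only)."""
    statement: str
    supporting: list = field(default_factory=list)     # evidence consistent with the assumption
    contradicting: list = field(default_factory=list)  # evidence that undermines it

    def status(self) -> str:
        # Even one piece of contrary evidence is enough to flag the assumption for review.
        if self.contradicting:
            return "invalidated or needs revision"
        if self.supporting:
            return "supported so far"
        return "untested"

# Hypothetical assumptions drawn from the problem-validation questions above.
assumptions = [
    Assumption("Clients beyond our pilot group experience this problem."),
    Assumption("The problem is painful enough that customers would pay to solve it."),
]

assumptions[0].supporting.append("8 of 10 interviewees described the same pain point unprompted")
assumptions[1].contradicting.append("No interviewee had budget set aside for this problem")

for a in assumptions:
    print(f"{a.statement} -> {a.status()}")
```

Validity, in this framing, is simply the current state of each evidence ledger, and it changes as new interviews and experiments come in.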

Failing fast

Where applying science to innovation gets tricky is in accepting that we are never going to be able to prove that anything is true. In the 1930s, science gave up on the idea that we could prove something true and instead began to focus the bulk of scientific inquiry on disproving theories. When innovators talk about “failing fast,” they are focused on this idea. We could spend a lot of time accumulating supporting evidence to demonstrate that an assumption we’re making is valid, when it would be more efficient to find a single piece of contrary evidence showing that some or all of the assumption is false.
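
As a rough illustration of why disconfirmation is cheaper, the snippet below contrasts tallying every confirmation with stopping at the first counterexample. It is a hypothetical sketch; the interview results and the decision rule are invented for illustration.

```python
# Hypothetical interview outcomes: True means the interviewee confirmed the assumption
# that they would pay to solve the problem; False means they contradicted it.
interview_results = [True, True, False, True, True]

def first_counterexample(results):
    """Return how many interviews it took to hit contrary evidence, or None if none was found."""
    for count, confirmed in enumerate(results, start=1):
        if not confirmed:
            return count  # stop here: one counterexample is enough to challenge the assumption
    return None

needed = first_counterexample(interview_results)
if needed is None:
    print("No contrary evidence yet; the assumption survives this round of testing.")
else:
    print(f"Assumption challenged after {needed} interviews -- revisit it before investing further.")
```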

If we can demonstrate that our problem is less significant than we thought, or has less potential for revenue generation, we can change direction, temper the investment in our solution, and avoid going down paths that ultimately lead nowhere. This, however, is the ideal scenario; most innovation projects must account for stakeholders who vary in how objective they are willing to be when faced with evidence that contradicts their expectations or opinions.

To learn how to manage personal biases and stakeholders, see the second part of the series.

Authors

Nathan Schneeberger, PhD
Director – Research

Paige Seaborn
Senior Associate – Corporate Innovation
