August 13, 2020

Quantifying Belief: How to Make Better Decisions

      When Andrew Mason, founder of Groupon, wanted to improve his email conversion metrics, he turned to data analysis. His team tested the impact of sending two emails per day instead of one and found that, while customers who received two emails a day unsubscribed at a higher rate, the ones who stayed generated more revenue. Setting aside his intuition, he had his team switch to the two-a-day model.

      This was not a good decision. While there is no doubt that the Groupon data scientists achieved statistically significant results, they failed to consider the long-term effects of the change. Groupon became little more than a “marketplace of coupons,” Mason admits, eventually burning through the revenue potential of its dwindling market.

      From this example, it would be easy to conclude that data-driven decision making is more trouble than it’s worth.

      Putting your faith in statistics and modeling alone can be quite risky, as data-driven analytics and insights are very prone to “crimes” of malpractice and misuse. But equally, an over-reliance on intuition can lead to suboptimal decision making, especially within teams.

      When data is expensive or difficult to source, many companies use qualitative frameworks to synthesize ideas and opinions. While these can help simplify incredibly complex information, they often rely too heavily on intuition and fail to immunize teams against the perils of human bias.

      And we humans have a lot of biases to be wary of. Confirmation bias, availability bias, and representativeness bias, to name a few, help us simplify decisions based on past experience, but they often lead us to judge incorrectly. All these biases compound with other social biases in group decision making, creating minefields of judgement errors for project teams.

      So how might we balance intuition and data to make better group decisions? At Method we have a method. We call it Fact-based Hypothesis Testing, and it helps us make better decisions from qualitative and quantitative evidence while reducing the bias that creeps into those decisions. When the evidence for a particular hypothesis is mainly subjective or qualitative, Fact-based Hypothesis Testing makes rigorous statistical analysis possible. It works by asking team members how the artifacts, evidence, and data acquired during the project affect the likelihood of each hypothesis being true, then analyzing and combining their answers using Bayesian statistics. The result is an audit trail of how the group considered evidence and how its opinions changed over the course of the project.

      To illustrate how the system works, consider the development of an energy usage app called EcoWatch. Your team is trying to determine if the product is desirable to 25- to 34-year-old first-time homeowners by evaluating a number of pieces of evidence. You could frame the project as a test of two hypotheses:

      “Hypothesis A: EcoWatch is desirable to 25- to 34-year-old first-time homeowners”
      “Hypothesis B: EcoWatch is not desirable to 25- to 34-year-old first-time homeowners”

      First, you would define your team’s “prior probability” for each hypothesis. Ask each team member to evaluate, in qualitative terms, how likely they believe each hypothesis is to be true (on a scale from impossible to extremely likely). The system then converts each member’s evaluation into a probability, and these are combined to produce the group’s probability that each hypothesis is true before any evidence has been evaluated.
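      To make this concrete, here is a minimal sketch in Python of how that elicitation step might work. The qualitative scale, its numeric mapping, and the choice to combine individual answers by averaging in log-odds space are illustrative assumptions for this article, not Method’s exact procedure, and the names below are hypothetical.

      import math

      # Hypothetical mapping from the qualitative scale to probabilities.
      SCALE = {
          "impossible": 0.01,
          "very unlikely": 0.10,
          "unlikely": 0.30,
          "even chance": 0.50,
          "likely": 0.70,
          "very likely": 0.90,
          "extremely likely": 0.99,
      }

      def logit(p):
          return math.log(p / (1 - p))

      def inv_logit(x):
          return 1 / (1 + math.exp(-x))

      def group_prior(answers):
          """Combine individual answers by averaging in log-odds space."""
          logits = [logit(SCALE[a]) for a in answers]
          return inv_logit(sum(logits) / len(logits))

      # Example: three team members rate Hypothesis A before seeing any evidence.
      prior_a = group_prior(["likely", "even chance", "very likely"])
      print(f"Group prior for Hypothesis A: {prior_a:.2f}")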

      Then you would evaluate the evidence from the project — in this case, evidence may look like the results of a survey or the synthesis of a user test. For each piece of evidence, the system asks two questions:

      “If Hypothesis A were 100 percent true, how likely is it that you would see this evidence?”

      “If Hypothesis B were 100 percent true, how likely is it that you would see this evidence?”

      If I were a team member, I might be inclined to say that the answer to the first question is “likely” and the answer to the second is “unlikely.” Using Bayesian statistics, we can combine all the team members’ answers to produce a group answer that fairly represents the group’s collective beliefs. This process continues as new evidence emerges or is added to the system, creating an audit trail of the likelihood of each hypothesis over time. By the end of the project, we have not only the group’s preferred conclusion but also a rigorous and systematic way of understanding how the team arrived at it.
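      Continuing the sketch above, the update step might look something like the following. Pooling the members’ likelihood judgements with a geometric mean, and treating Hypotheses A and B as exhaustive, are illustrative choices rather than a description of Method’s actual implementation.

      def pool_likelihoods(answers):
          """Pool the members' likelihood judgements with a geometric mean."""
          probs = [SCALE[a] for a in answers]
          return math.exp(sum(math.log(p) for p in probs) / len(probs))

      def bayes_update(prior_a, like_a, like_b):
          """P(A | evidence) via Bayes' rule, treating A and B as exhaustive."""
          numerator = like_a * prior_a
          return numerator / (numerator + like_b * (1 - prior_a))

      # Hypothetical evidence: each tuple holds the members' answers to
      # "how likely is this evidence if A were true?" and "... if B were true?"
      evidence = [
          ("survey results", ["likely", "likely", "very likely"],
                             ["unlikely", "even chance", "unlikely"]),
          ("user test",      ["very likely", "likely", "likely"],
                             ["very unlikely", "unlikely", "unlikely"]),
      ]

      # Audit trail: the group's belief in Hypothesis A after each piece of evidence.
      belief_a = prior_a
      trail = [("prior", belief_a)]
      for name, given_a, given_b in evidence:
          belief_a = bayes_update(belief_a, pool_likelihoods(given_a), pool_likelihoods(given_b))
          trail.append((name, belief_a))

      for step, p in trail:
          print(f"{step}: P(Hypothesis A) = {p:.2f}")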

      The Fact-based Hypothesis Testing framework has four key features:

      1. Independent: each person evaluates the relevance of evidence independently of all other team members, helping to mitigate the effect of groupthink.
      2. Anonymous: each person’s answers are kept secret from the rest of the group, meaning there can be no finger-pointing if a person’s opinion dissents from the group.
      3. Rigorous: the team’s answers are combined using a statistical procedure that avoids some of the pitfalls of simple aggregation techniques such as pooling or averaging.
      4. Calibrated: if the team leader believes a systematic bias could be at play in the group’s decision making, they can create fake evidence that, if true, would strongly confirm one hypothesis over the others. If the team members don’t evaluate this planted evidence appropriately, the team leader can highlight the discrepancy and address the bias with the team (a brief sketch of how such a check might work follows this list).
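      As a rough illustration of that calibration check, and continuing the assumptions of the earlier sketches, the system might compare the pooled likelihood of the planted evidence under the hypothesis it was designed to confirm against the alternative; the threshold below is arbitrary and purely for illustration.

      def calibration_flag(given_a, given_b, min_ratio=5.0):
          """Flag a possible systematic bias if planted evidence that should
          strongly favour Hypothesis A does not yield a lopsided likelihood ratio."""
          ratio = pool_likelihoods(given_a) / pool_likelihoods(given_b)
          return ratio < min_ratio

      # The planted evidence was designed to overwhelmingly confirm Hypothesis A,
      # yet the team's answers barely distinguish the two hypotheses.
      print(calibration_flag(["likely", "even chance"], ["even chance", "likely"]))  # True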

      If Andrew Mason of Groupon had evaluated his email decision with Fact-based Hypothesis Testing, he might have found that sticking with one email a day made sense as a way to reduce customer churn. He would have been able to balance the data against his intuition, without feeling the need to choose one over the other. And he could have brought that decision to his shareholders with an audit trail, giving them a window into which hypotheses were considered, what evidence was evaluated, and how his team’s opinion of the hypotheses changed over time.

      * * *

      This article was written by Stuart George and edited by Erin Peace. Illustration by Claire Lorman. To learn more about our process, or understand how your teams might use Fact-based Hypothesis Testing, please get in touch.