Defining Brilliant Mistakes

The following framework can help you distinguish different types of mistakes using a simple cost-benefit analysis. It groups mistakes into four types, with an illustrative example or two listed for each:

A.  Tragic (high cost, low benefit): driving a car into a tree and being badly injured
B.  Serious (high cost, high benefit): getting a divorce; filing for bankruptcy
C.  Trivial (low cost, low benefit): getting a parking ticket; losing your wallet
D.  Brilliant (low cost, high benefit): Edward Lorenz discovering the butterfly effect
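
The grouping above is nothing more than a two-by-two sort on cost and benefit. As a minimal sketch in Python (the "high"/"low" labels and the sample calls are ours, purely for illustration, not a formal scale):

```python
# Minimal sketch of the two-by-two cost-benefit grouping described above.
# "high" / "low" are illustrative labels, not a formal measurement scale.

QUADRANTS = {
    ("high", "low"):  "A. Tragic",
    ("high", "high"): "B. Serious",
    ("low", "low"):   "C. Trivial",
    ("low", "high"):  "D. Brilliant",
}

def classify(cost: str, benefit: str) -> str:
    """Map a (cost, benefit) pair onto one of the four mistake types."""
    return QUADRANTS[(cost, benefit)]

print(classify("high", "low"))   # A. Tragic
print(classify("low", "high"))   # D. Brilliant
```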


Clearly, tragic mistakes (A), which exact a high price with no personal benefit, are to be wholly avoided. Serious mistakes (B) also carry a high cost, but with the potential of a high benefit. These mistakes can provide tremendous lessons and may be valuable in retrospect, but we need to be careful about inviting them into our lives. You shouldn’t get divorced to see what you will learn; you don’t drive your business into the ground to taste the lessons of failure. A third class, trivial mistakes (C), represents low cost and low reward. Examples are getting a parking ticket or missing a plane. The results are not tragic, but the lessons learned are inconsequential or obvious, along the lines of “put more money in the meter” or “leave for the airport a bit earlier.” Truly brilliant mistakes (D) are those that offer high benefits at a relatively low cost because they open new portals of discovery. Here are some examples:

Brilliant Mistakes

When Citibank first proposed extending credit card offers to college students, many considered this idea to be a big mistake. According to conventional wisdom, students were terrible credit risks—immature youngsters with no job or home, high debts, and limited or no credit history. Though the move was clearly a mistake by traditional standards, Citibank reaped high rewards, both in terms of an increased customer base (many of these students evolved into valuable long-term customers) and the new insights derived from the experiment (namely, that many parents are quite willing to bail out student cardholders when they run into the red).

In the winter of 1961, Edward Lorenz, an academic meteorologist, had just completed a large round of simulations of a particular weather system and wanted to repeat the experiment over a longer time frame. Rather than waste valuable processing time, he used the printout from the first simulation and manually typed in the final numbers from the results table. However, the second simulation soon diverged radically from the first, contrary to Lorenz’s expectation. He puzzled over it for days. Then it struck him: he had entered numbers from a computer printout that rounded all values to three places of precision after the decimal, whereas the computer stored six digits of precision internally. This tiny rounding error in the initial conditions pushed the second simulation onto quite a different path. Although the original weather-prediction program he had designed failed in its intent, the mistake led Lorenz to a far more significant discovery: what is now known as the “butterfly effect.” Lorenz has since assumed an honored place in science as the father of chaos theory, and he was awarded the 1991 Kyoto Prize for discovering “deterministic chaos.”
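
The rounding effect Lorenz stumbled on is easy to reproduce. The sketch below is only an illustration, not his original 1961 model (which had twelve variables): it integrates the familiar three-variable Lorenz-63 system twice, once from a made-up “full precision” starting state and once from the same state rounded to three decimals, and prints how far apart the two runs drift. The starting values, step size, and run length are arbitrary choices for demonstration.

```python
# Illustrative sketch: the three-variable Lorenz-63 system as a stand-in for
# Lorenz's 1961 experiment. Two runs start from states that differ only in the
# fourth decimal place, mimicking the three-decimal printout vs. the computer's
# six internally stored digits.

def lorenz_step(state, dt, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Advance the Lorenz-63 equations one step with a simple RK4 scheme."""
    def deriv(s):
        x, y, z = s
        return (sigma * (y - x), x * (rho - z) - y, x * y - beta * z)

    def shift(a, b, scale):
        return tuple(ai + scale * bi for ai, bi in zip(a, b))

    k1 = deriv(state)
    k2 = deriv(shift(state, k1, dt / 2))
    k3 = deriv(shift(state, k2, dt / 2))
    k4 = deriv(shift(state, k3, dt))
    return tuple(
        s + dt / 6 * (a + 2 * b + 2 * c + d)
        for s, a, b, c, d in zip(state, k1, k2, k3, k4)
    )

full = (1.000127, 3.000214, 15.000356)       # hypothetical "six digit" state
rounded = tuple(round(v, 3) for v in full)   # the three-decimal "printout" state

dt, steps = 0.01, 3000
for i in range(steps + 1):
    if i % 500 == 0:
        gap = max(abs(a - b) for a, b in zip(full, rounded))
        print(f"t = {i * dt:5.1f}   max coordinate difference = {gap:.6f}")
    full = lorenz_step(full, dt)
    rounded = lorenz_step(rounded, dt)
```

Run for a few thousand steps, the two trajectories agree at first and then differ by amounts comparable to the size of the attractor itself: the butterfly effect in miniature.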

The Lorenz example illustrates the two prime ingredients of a brilliant mistake:

  1. something goes wrong far beyond the range of prior expectation; and
  2. new insights emerge whose benefits greatly exceed the mistake’s cost.

The brilliant part lies especially in condition (2), but also in recognizing that (1) is necessary for (2) to occur. Our actions should aim to increase the chance of (1) and (2) occurring together. If those two conditions are met, we are dealing with a brilliant mistake.

Following are some examples of companies actually making mistakes on purpose to challenge their deeper assumptions and create new learning opportunities (against the better judgment of many smart people):

Examples of Deliberate Mistakes

Decision Strategies International (DSI) was determined to test its policy of not responding to Requests for Proposals (RFPs) that came in unsolicited. If DSI received an RFP and did not know anyone at the client organization, it assumed the client was merely price shopping or had already settled on a favorite candidate. Nonetheless, DSI assigned recent hires, under senior supervision, to develop a proposal for the next RFP that came in. To DSI’s pleasant surprise, the unknown client (a regional electric utility) accepted the proposal and then hired DSI for other projects, amounting to more than $1 million in consulting fees.

Until 1984, US telephone companies had to provide service to every household in their region, regardless of its credit history. The companies collected deposits from customers with the worst credit histories, believing that these customers were most likely to damage equipment and/or not pay their bills. They tested this assumption by not charging these “bad” risks a deposit for several months, and found that this customer segment had fewer delinquencies than some other customer segments, along with less damage to equipment. This counterintuitive insight led them to recalibrate their risk models and to charge deposits based on different criteria. The improved credit models added an average of $137 million to the bottom line every year for a decade.