First-level thinking is simplistic and superficial, and just about everyone can do it (a bad sign for anything involving an attempt at superiority). All the first-level thinker needs is an opinion about the future, as in “The outlook for the company is favorable, meaning the stock will go up.”
Second-level thinking is deep, complex, and convoluted. The second-level thinker takes a great many things into account:
What is the range of likely future outcomes?
What outcome do I think will occur?
What’s the probability I’m right?
What does the consensus think?
How does my expectation differ from the consensus?
How does the current price for the asset comport with the consensus view of the future, and with mine?
Is the consensus psychology that’s incorporated in the price too bullish or bearish?
What will happen to the asset’s price if the consensus turns out to be right, and what if I’m right?
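The questions above can be made concrete as a probability-weighted scenario calculation. A minimal sketch, with entirely hypothetical probabilities and prices chosen for illustration:

```python
# Second-level thinking as probability-weighted scenarios.
# All numbers here are invented, for illustration only.

scenarios = {           # outcome -> (my probability, price in that outcome)
    "bull": (0.25, 140.0),
    "base": (0.50, 110.0),
    "bear": (0.25,  70.0),
}

current_price = 100.0   # assumed market price, with the consensus baked in

# My expected value across the range of likely outcomes
my_ev = sum(p * price for p, price in scenarios.values())

# Edge: how my expectation differs from what the current price implies
edge = my_ev / current_price - 1

print(f"expected value: {my_ev:.1f}")   # 107.5
print(f"implied edge:   {edge:+.1%}")   # +7.5%
```

The point of the sketch is the structure, not the numbers: the second-level thinker is forced to state a range of outcomes, attach probabilities, and compare the result against the price the consensus has already set.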
The difference in workload between first-level and second-level thinking is clearly massive, and the number of people capable of the latter is tiny compared to the number capable of the former.
Sam Savage says: If you’re about to play craps, do you waste your time trying to forecast the numbers on the individual dice or do you model that sevens are way more likely than twos or twelves when you roll two dice?
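Savage's dice point can be checked by brute force: enumerate all 36 equally likely outcomes of two dice and count the ways each sum can occur.

```python
from collections import Counter

# Count how many of the 36 equally likely two-dice outcomes
# produce each possible sum.
sums = Counter(a + b for a in range(1, 7) for b in range(1, 7))

print(sums[7], "of 36 ways to roll a 7")    # 6
print(sums[2], "of 36 ways to roll a 2")    # 1
print(sums[12], "of 36 ways to roll a 12")  # 1
```

A seven is six times as likely as a two or a twelve; no forecast of the individual dice is needed to know that, only a model of the distribution.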
Doug Hubbard says: The only way to know if you have a better forecast is to quantify the reduction in uncertainty.
Doug Hubbard says: If you are doing it in your head, you ARE using a model. Calibrate your own estimates, then compare them to other modeling methods. You’ll be surprised how inconsistent you are without quantitative measurement.
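One standard way to put a number on "a better forecast," in the spirit of Hubbard's point, is a proper scoring rule such as the Brier score: the mean squared error between stated probabilities and what actually happened, where lower is better. The forecasts and outcomes below are hypothetical.

```python
def brier(probs, outcomes):
    """Mean squared error of probability forecasts against 0/1 outcomes.

    Lower scores indicate better-calibrated, more accurate forecasts.
    """
    return sum((p - o) ** 2 for p, o in zip(probs, outcomes)) / len(probs)

outcomes   = [1, 0, 1, 1, 0]             # what actually happened
gut_feel   = [0.9, 0.1, 0.9, 0.9, 0.9]   # overconfident "in your head" model
calibrated = [0.7, 0.3, 0.7, 0.7, 0.4]   # calibrated estimates

print(f"gut feel:   {brier(gut_feel, outcomes):.3f}")    # 0.170
print(f"calibrated: {brier(calibrated, outcomes):.3f}")  # 0.104
```

In this toy example the less confident but better-calibrated forecaster scores lower; without some quantitative measurement like this, the inconsistency Hubbard describes is invisible.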
Sam Savage says: Even if the model does not give you the right answer, the very act of modeling often leads to the right question.
We do not expect early-stage companies to produce free cash flow today. We do, however, want to encourage you to have a clear line of sight to producing free cash flow over time.
One way to do that is to think about the atomic units of investment for your business. The difference between an investment and an expense is that an expense yields no future benefit, while an investment yields a future benefit. At the early stages, the atomic unit may be the customer unit.
Hopefully that's a very clear investment, where the amount of money you put in to acquire and serve the customer is less than the amount of money you get back from the customer . . . and produces free cash at the company level over time.
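The customer-unit arithmetic can be sketched in a few lines. All figures below are invented for illustration; real models would also discount future margin and account for gross-margin assumptions.

```python
# Hypothetical "atomic unit" customer economics: the investment is
# recoverable when lifetime contribution exceeds the cost to acquire
# and serve the customer. Every number here is an assumption.

cac            = 300.0   # cost to acquire one customer
monthly_margin = 25.0    # contribution margin per customer per month
churn          = 0.02    # monthly churn rate

lifetime_months = 1 / churn                # expected customer lifetime
ltv = monthly_margin * lifetime_months     # lifetime contribution value
payback_months = cac / monthly_margin      # months to recoup acquisition

print(f"LTV:            {ltv:.0f}")                  # 1250
print(f"LTV/CAC:        {ltv / cac:.1f}")            # 4.2
print(f"payback period: {payback_months:.0f} months")  # 12
```

If each customer unit returns more than it costs, as here, adding customers is an investment that compounds into company-level free cash flow over time; if not, growth simply scales up a loss.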
If the investment world has had one constant over the past two decades, it has been the downward trend in interest rates. For years, the global risk of deflation in developed markets has generally outweighed the risk of rising prices, persuading central bankers around the world to keep the monetary spigots open, driving real interest rates to historic lows . . .
Consider the last 20 years of interest rate movements. While the 10-year Treasury yield has dropped from 6.0% in 2000 to around 1.5% today, the average leveraged buyout multiple has jumped to 12.3 times earnings before interest, taxes, depreciation, and amortization (EBITDA), from 6.8x in 2000, a 1.8-fold increase (see Figure 1).
But the inflation outlook has shifted dramatically in the wake of the Covid-19 pandemic and the war in Ukraine. Spikes in both producer and consumer prices suggest the era of easy money may be drawing to a close . . . If rising rates flatten multiples, where would that leave you in terms of generating differentiated returns?
To be an investor is to live constantly at the intersection of story and uncertainty. We build mental models, frameworks, and processes to price securities accurately, and we overlay them with a story about the economic, geopolitical, and psychological factors that form the backdrop against which we value them. We create data-driven stories to explain our differentiated view of a security that is out of balance within its sector or asset class, justifying a variant perception that we think will generate alpha. Sometimes, however, investors create a framework that seems sound, only to discover that the method is no better than a system to “win” at Russian roulette. The key, of course, is to change your framework when the environment changes. Even the most sophisticated quant investors, employing hundreds of PhD mathematicians and physicists, find that their models can fall short due to the ever-changing topography of the relevant data.
. . . I also try in all my forecasts to recognize that the world is extremely probabilistic. So what I said was that there was a ⅔ chance that we would have a real problem. It might be inflation. It might be a hard landing from stopping inflation. And I might turn out to have been wrong. As the year went on, I got increasingly confident that I had been right.
I think part of the problem of other people was they thought in terms of a single forecast. Part of what was wrong was people were prisoners of recent history. You know, economists do time-series statistics on the last 40 years of data. If for 40 years inflation has been stable, then whatever statistical procedure you use on stable inflation, you’re going to assume that nothing changes inflation very much because nothing ever did change inflation very much.
So I think that the secret, or a secret, to being right is maintaining a certain eclecticism of perspective, which helps you avoid overconfidence, helps you avoid being a prisoner of the recent past, and leaves you open to a wide range of possibilities. I also think that it’s extraordinarily important to avoid motivated belief and to try to form judgments about what is, rather than what we prefer to be . . .
If your goal is to make more accurate probability judgments . . . you should either try to train perspective taking or recruit good perspective takers: people who are tolerant of clashing views (high dialectical complexity) and who are skilled at spotting comparison classes of historical precedents that put current situations in an outside-view context. Put differently, you should look for forecasters who are exceptions to two of the more robust generalizations of 20th century experimental psychology. Seek forecasters who don’t rush to reduce cognitive dissonance (Festinger, 1957) and who don’t jump to conclusions from vivid case-specific events (Kahneman, 2011).