Archive for June, 2011

Michael Mauboussin appeared Friday on Consuelo Mack’s WealthTrack to discuss several of the ideas in his excellent book, Think Twice. Particularly compelling is his story about Triple Crown prospect Big Brown and the advantage of the “outside view” – the statistical one – over the “inside view” – the specific, anecdotal one (excerpted from the book):

June 7, 2008 was a steamy day in New York, but that didn’t stop fans from stuffing the seats at Belmont Park to see Big Brown’s bid for horseracing’s pinnacle, the Triple Crown. The undefeated colt had been impressive. He won the first leg of the Triple Crown, the Kentucky Derby, by 4 ¾ lengths and cruised to a 5 ¼-length win in the second leg, the Preakness.

Oozing with confidence, Big Brown’s trainer, Rick Dutrow, suggested that it was a “foregone conclusion” that his horse would take the prize. Dutrow was emboldened by the horse’s performance, demeanor, and even the good “karma” in the barn. Despite the fact that no horse had won the Triple Crown in over 30 years, the handicappers shared Dutrow’s enthusiasm, putting 3-to-10 odds—almost a 77 percent probability—on his winning.

The fans came out to see Big Brown make history. And make history he did—it just wasn’t what everyone expected. Big Brown was the first Triple Crown contender to finish dead last.

The story of Big Brown is a good example of a common mistake in decision making: psychologists call it using the “inside” instead of the “outside” view.

The inside view considers a problem by focusing on the specific task and by using information that is close at hand. It’s the natural way our minds work. The outside view, by contrast, asks if there are similar situations that can provide a statistical basis for making a decision. The outside view wants to know if others have faced comparable problems, and if so, what happened. It’s an unnatural way to think because it forces people to set aside the information they have gathered.

Dutrow and others were bullish on Big Brown given what they had seen. But the outside view demands to know what happened to horses that had been in Big Brown’s position previously. It turns out that 11 of the 29 had succeeded in their Triple Crown bid in the prior 130 years, about a 40 percent success rate. But scratching the surface of the data revealed an important dichotomy. Before 1950, 8 of the 9 horses that had tried to win the Triple Crown did so. But since 1950, only 3 of 20 succeeded, a measly 15 percent success rate. Further, when compared to the other six recent Triple Crown aspirants, Big Brown was by far the slowest. A careful review of the outside view suggested that Big Brown’s odds were a lot longer than what the tote board suggested. A favorite to win the race? Yes. A better than three-in-four chance? Bad bet.

Mauboussin on WealthTrack:

Hat Tip Abnormal Returns.




The excellent Empirical Finance Blog has a superb series of posts on an investment strategy called “Profit and Value” (How “Magic” is the Magic Formula? and The Other Side of Value), which Wes describes as the “academic version” of Joel Greenblatt’s “Magic Formula.” (Incidentally, Greenblatt is speaking at the New York Value Investors Congress in October this year. I think seeing Greenblatt alone is worth the price of admission.) The Profit and Value approach is similar to the Magic Formula in that it ranks stocks independently on “value” and “quality,” and then reranks on the combined rankings. The stock with the lowest combined ranking is the most attractive, the stock with the next lowest combined ranking the next most attractive, and so on.
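To make the mechanics concrete, here's a minimal sketch of the independent-rank-then-combine step, assuming a pandas DataFrame with hypothetical value_score and quality_score columns (higher is better for both). This is my own illustration, not Wes's or Greenblatt's code:

```python
import pandas as pd

def combined_rank(df, value_col="value_score", quality_col="quality_score"):
    """Rank stocks on value and quality independently, then re-rank on the sum.

    Assumes higher raw scores are better; combined rank 1 is the most attractive stock.
    """
    out = df.copy()
    out["value_rank"] = out[value_col].rank(ascending=False)
    out["quality_rank"] = out[quality_col].rank(ascending=False)
    # The stock with the lowest sum of ranks gets combined rank 1, and so on.
    out["combined_rank"] = (out["value_rank"] + out["quality_rank"]).rank(method="first")
    return out.sort_values("combined_rank")

# Toy universe with made-up numbers:
universe = pd.DataFrame({
    "ticker": ["AAA", "BBB", "CCC"],
    "value_score": [0.90, 0.40, 0.70],    # e.g. a value metric
    "quality_score": [0.25, 0.40, 0.10],  # e.g. a quality metric
})
print(combined_rank(universe)[["ticker", "combined_rank"]])
```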

The Profit and Value strategy differs from the Magic Formula strategy in its methods of determining value and quality. Profit and Value uses straight book-to-market to determine value, where the Magic Formula uses EBIT / TEV. And where the Magic Formula uses EBIT / (NPPE + net working capital) to determine quality, Profit and Value uses "Gross Profitability," a metric described in a fascinating paper by Robert Novy-Marx called "The other side of value" (more on this later).
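Written out as simple functions, the two pairs of metrics look like this (a hedged, illustrative rendering of the definitions above; the parameter names are my own, and accounting details such as TEV construction vary by implementation):

```python
def book_to_market(book_equity, market_cap):
    """Profit and Value's value metric: book value of equity / market capitalization."""
    return book_equity / market_cap

def ebit_to_tev(ebit, total_enterprise_value):
    """The Magic Formula's value metric: EBIT / total enterprise value."""
    return ebit / total_enterprise_value

def magic_formula_return_on_capital(ebit, net_ppe, net_working_capital):
    """The Magic Formula's quality metric: EBIT / (net PP&E + net working capital)."""
    return ebit / (net_ppe + net_working_capital)

def gross_profitability(revenues, cost_of_goods_sold, total_assets):
    """Novy-Marx's quality metric: (revenues - COGS) / total assets."""
    return (revenues - cost_of_goods_sold) / total_assets
```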

My prima facie attraction to the Profit and Value strategy was twofold: First, Profit and Value uses book-to-market as the measure of value. I have a long-standing bias for asset-based metrics over income-based ones, and for good reason. (After examining the performance analysis of Profit and Value, however, I've made a permanent switch to another metric that I'll discuss in more detail later.) Second, the back-tested returns to the strategy appear to be considerably higher than those for the Magic Formula. Here's a chart from Empirical Finance comparing the back-tested returns to each strategy with yearly rebalancing:

Profit and Value is the clear winner, and that is the obvious reason for preferring one strategy over the other. It is not, however, the end of the story. There are some problems with the performance of Profit and Value, which I discuss in some detail below. Over the next few weeks I'll post my full thoughts in a series of posts under the following headings; for now, here are the summaries. I welcome any feedback.

Determining “quality” using “gross profitability”

In a 2010 paper called "The other side of value: Good growth and the gross profitability premium," Robert Novy-Marx discusses his preference for "gross profitability" over other measures of performance like earnings or free cash flow. The "Gross Profitability" factor Novy-Marx uses is as follows:

Gross Profitability = (Revenues – Cost of Goods Sold) / Total Assets

Novy-Marx’s rationale for preferring gross profitability is compelling. First, it makes sense:

Gross profits is the cleanest accounting measure of true economic profitability. The farther down the income statement one goes, the more polluted profitability measures become, and the less related they are to true economic profitability. For example, a firm that has both lower production costs and higher sales than its competitors is unambiguously more profitable. Even so, it can easily have lower earnings than its competitors. If the firm is quickly increasing its sales through aggressive advertising, or commissions to its sales force, these actions can, even if optimal, reduce its bottom line income below that of its less profitable competitors. Similarly, if the firm spends on research and development to further increase its production advantage, or invests in organizational capital that will help it maintain its competitive advantage, these actions result in lower current earnings. Moreover, capital expenditures that directly increase the scale of the firm's operations further reduce its free cash flows relative to its competitors. These facts suggest constructing the empirical proxy for productivity using gross profits. Scaling by a book-based measure, instead of a market-based measure, avoids hopelessly conflating the productivity proxy with book-to-market. I scale gross profits by book assets, not book equity, because gross profits are not reduced by interest payments and are thus independent of leverage.

Second, it works:

In a horse race between these three measures of productivity, gross profits-to-assets is the clear winner. Gross profits-to-assets has roughly the same power predicting the cross section of expected returns as book-to-market. It completely subsumes the earnings-based measure, and has significantly more power than the measure based on free cash flows. Moreover, demeaning this variable dramatically increases its power. Gross profits-to-assets also predicts long run growth in earnings and free cash flow, which may help explain why it is useful in forecasting returns.

I think it’s interesting that gross profits-to-assets is as predictive as book-to-market. I can’t recall any other fundamental performance measure that is predictive at all, let alone as predictive as book-to-market (EBIT / (NPPE + net working capital) is not, and neither are gross margins, ROE, ROA, or five-year earnings gains). There are, however, some obvious problems with gross profitability as a stand-alone metric. More later.
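On the "demeaning" point in the quote above: as I read the paper, it means comparing each firm's gross profits-to-assets to the mean of its peer group (for example, its industry) rather than using the raw level. A minimal sketch of that idea, assuming a hypothetical DataFrame with gp_to_assets and industry columns:

```python
import pandas as pd

def demean_within_group(df, value_col="gp_to_assets", group_col="industry"):
    """Subtract each group's mean so firms are compared to their peers.

    A positive demeaned value means the firm is more profitable than its peer-group
    average, which is the comparison the demeaned factor ranks on.
    """
    out = df.copy()
    group_mean = out.groupby(group_col)[value_col].transform("mean")
    out[value_col + "_demeaned"] = out[value_col] - group_mean
    return out
```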

White knuckles: Profit and Value performance analysis

While Novy-Marx’s “Gross Profitability” factor seems to be predictive, in combination with the book-to-market value factor the results are very volatile. To the extent that an individual investor can ignore this volatility, the strategy will work very well. As an institutional strategy, however, Profit and Value is a widow-maker. First, the peak-to-trough drawdown on Profit and Value through the 2007–2009 credit crisis would have put any professional money manager following the strategy out of business. Second, the strategy selects highly leveraged stocks, and one needs a bigger set of mangoes than I possess to buy them blindly. The second problem – the preference for highly leveraged stocks – contributes directly to the first – big drawdowns in a downturn – because investors tend to vomit up highly leveraged stocks as the market falls. Also concerning is the likely performance of Profit and Value in an environment of rising interest rates. Given the negative real rates that presently prevail, such an environment seems likely to manifest in the future. I’ll look specifically at the performance of Profit and Value in an environment of rising rates.
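For readers who want to check this kind of claim themselves, here's a minimal sketch of how a peak-to-trough (maximum) drawdown can be computed from a periodic return series. It's illustrative only and doesn't reproduce Wes's back-test:

```python
import pandas as pd

def max_drawdown(returns):
    """Largest peak-to-trough decline of the cumulative return path.

    `returns` is a pandas Series of periodic returns (0.02 means +2%).
    The result is negative, e.g. -0.55 for a 55% drawdown.
    """
    wealth = (1.0 + returns).cumprod()        # growth of $1 invested
    running_peak = wealth.cummax()            # highest level reached so far
    drawdowns = wealth / running_peak - 1.0   # percentage below the prior peak
    return drawdowns.min()

# Toy example: a sharp 2008-style fall followed by a partial recovery
example = pd.Series([0.03, 0.02, -0.20, -0.25, -0.15, 0.10, 0.12])
print(f"Maximum drawdown: {max_drawdown(example):.1%}")
```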

A better metric than book-to-market

The performance issues with Profit and Value discussed above – the volatility and the preference for highly leveraged balance sheets – are problems with the book-to-market criterion. As Greenblatt points out in You Can Be a Stock Market Genius, it is partially the leverage embedded in low price-to-book (high book-to-market) stocks that contributes to the outperformance over the long term. In the short term, however, the leverage can be a problem. There are other problems with cheap book value. As I discussed in The Small Cap Paradox: A problem with LSV’s Contrarian Investment, Extrapolation, and Risk in practice, the stocks in the lowest price-to-book decile are very small. James P. O’Shaughnessy discusses this issue in What Works on Wall Street:

The glaring problem with this method, when used with the Compustat database, is that it’s virtually impossible to buy the stocks that account for the performance advantage of small capitalization strategies. Table 4-9 illustrates the problem. On December 31, 2003, approximately 8,178 stocks in the active Compustat database had both year-end prices and a number for common shares outstanding. If we sorted the database by decile, each decile would be made up of 818 stocks. As Table 4-9 shows, market capitalization doesn’t get past $150 million until you get to decile 6. The top market capitalization in the fourth decile is $61 million, a number far too small to allow widespread buying of those stocks.

A market capitalization of $2 million – the level of the cheapest and best-performing decile – is uninvestable. This leads O’Shaughnessy to make the point that “micro-cap stock returns are an illusion”:

The only way to achieve these stellar returns is to invest only a few million dollars in over 2,000 stocks. Precious few investors can do that. The stocks are far too small for a mutual fund to buy and far too numerous for an individual to tackle. So there they sit, tantalizingly out of reach of nearly everyone. What’s more, even if you could spread $2,000,000 over 2,000 names, the bid–ask spread would eat you alive.

Even a small investor will struggle to buy enough stock in the 3rd or 4th deciles, which encompass stocks with market capitalizations below $26 million and $61 million respectively. These are not, therefore, institutional-grade strategies. Says O’Shaughnessy:

This presents an interesting paradox: Small-cap mutual funds justify their investments using academic research that shows small stocks outperforming large ones, yet the funds themselves cannot buy the stocks that provide the lion’s share of performance because of a lack of trading liquidity.

A review of the Morningstar Mutual Fund database proves this. On December 31, 2003, the median market capitalization of the 1,215 mutual funds in Morningstar’s all equity, small-cap category was $967 million. That’s right between decile 7 and 8 from the Compustat universe—hardly small.
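A minimal sketch of the kind of check O’Shaughnessy is describing: sort a universe into price-to-book deciles and look at the market caps inside each bucket (column names are hypothetical; the Compustat and Morningstar figures above are his, not mine):

```python
import pandas as pd

def decile_capacity(df, ratio_col="price_to_book", mcap_col="market_cap", buckets=10):
    """Bucket stocks into deciles on a valuation ratio and summarise market cap per bucket.

    Decile 1 holds the cheapest stocks; the practical question is whether the
    names in it are large enough to buy in any size.
    """
    out = df.copy()
    out["decile"] = pd.qcut(out[ratio_col], buckets, labels=range(1, buckets + 1))
    return out.groupby("decile", observed=True)[mcap_col].agg(["count", "median", "max"])

# Usage: run decile_capacity(universe) on a DataFrame of the investable universe and
# compare the median market cap in decile 1 against the amount of capital to be deployed.
```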

I spent some time researching alternatives to book-to-market. As much as it pained me to do so, I’ve now abandoned book-to-market as my primary valuation metric. In fact, I no longer use it at all. I’ll discuss these metrics, and their advantages over book value, in a later post.


The New York Value Investing Congress on October 17th and 18th this year has an unusually strong line-up. Joel Greenblatt, Bill Ackman, or James Chanos alone is worth the price of admission, but as-yet-unheralded investors like Michael Kao and Guy Gottfried will also likely impress, as they did in Pasadena.

Discount: Register by June 29, 2011 and you’ll pay just $2,395. That’s a total savings of $2,100 from the $4,495 others will pay later to attend! Save $2,100 off the usual price of admission by clicking here and using discount code N11GB2.

Here’s the list of managers presenting:

  • Bill Ackman, Pershing Square
  • Leon Cooperman, Omega Advisors
  • James Chanos, Kynikos Associates LP
  • Joel Greenblatt, Gotham Capital
  • Guy Gottfried, Rational Investment Group
  • Michael Kao, Akanthos Capital Management
  • Glenn Tongue, T2 Partners
  • Whitney Tilson, T2 Partners

The discount for Greenbackd readers expires in seven days, so take advantage now. Click here to receive the discount.

